abacusai.api_class ================== .. py:module:: abacusai.api_class Submodules ---------- .. toctree:: :maxdepth: 1 /autoapi/abacusai/api_class/abstract/index /autoapi/abacusai/api_class/ai_agents/index /autoapi/abacusai/api_class/ai_chat/index /autoapi/abacusai/api_class/batch_prediction/index /autoapi/abacusai/api_class/blob_input/index /autoapi/abacusai/api_class/connectors/index /autoapi/abacusai/api_class/dataset/index /autoapi/abacusai/api_class/dataset_application_connector/index /autoapi/abacusai/api_class/deployment/index /autoapi/abacusai/api_class/document_retriever/index /autoapi/abacusai/api_class/enums/index /autoapi/abacusai/api_class/feature_group/index /autoapi/abacusai/api_class/model/index /autoapi/abacusai/api_class/monitor/index /autoapi/abacusai/api_class/monitor_alert/index /autoapi/abacusai/api_class/project/index /autoapi/abacusai/api_class/python_functions/index /autoapi/abacusai/api_class/refresh/index /autoapi/abacusai/api_class/segments/index Attributes ---------- .. autoapisummary:: abacusai.api_class.DocumentRetrieverConfig abacusai.api_class.Segment Classes ------- .. autoapisummary:: abacusai.api_class.ApiClass abacusai.api_class.FieldDescriptor abacusai.api_class.JSONSchema abacusai.api_class.WorkflowNodeInputMapping abacusai.api_class.WorkflowNodeInputSchema abacusai.api_class.WorkflowNodeOutputMapping abacusai.api_class.WorkflowNodeOutputSchema abacusai.api_class.TriggerConfig abacusai.api_class.WorkflowGraphNode abacusai.api_class.WorkflowGraphEdge abacusai.api_class.WorkflowGraph abacusai.api_class.AgentConversationMessage abacusai.api_class.WorkflowNodeTemplateConfig abacusai.api_class.WorkflowNodeTemplateInput abacusai.api_class.WorkflowNodeTemplateOutput abacusai.api_class.ApiClass abacusai.api_class.HotkeyPrompt abacusai.api_class.ApiClass abacusai.api_class._ApiClassFactory abacusai.api_class.BatchPredictionArgs abacusai.api_class.ForecastingBatchPredictionArgs abacusai.api_class.NamedEntityExtractionBatchPredictionArgs abacusai.api_class.PersonalizationBatchPredictionArgs abacusai.api_class.PredictiveModelingBatchPredictionArgs abacusai.api_class.PretrainedModelsBatchPredictionArgs abacusai.api_class.SentenceBoundaryDetectionBatchPredictionArgs abacusai.api_class.ThemeAnalysisBatchPredictionArgs abacusai.api_class.ChatLLMBatchPredictionArgs abacusai.api_class.TrainablePlugAndPlayBatchPredictionArgs abacusai.api_class.AIAgentBatchPredictionArgs abacusai.api_class._BatchPredictionArgsFactory abacusai.api_class.ApiClass abacusai.api_class.Blob abacusai.api_class.BlobInput abacusai.api_class._ApiClassFactory abacusai.api_class.DatasetConfig abacusai.api_class.StreamingConnectorDatasetConfig abacusai.api_class.KafkaDatasetConfig abacusai.api_class._StreamingConnectorDatasetConfigFactory abacusai.api_class.ApiClass abacusai.api_class.DocumentType abacusai.api_class.OcrMode abacusai.api_class.DatasetConfig abacusai.api_class.ParsingConfig abacusai.api_class.DocumentProcessingConfig abacusai.api_class.DatasetDocumentProcessingConfig abacusai.api_class.IncrementalDatabaseConnectorConfig abacusai.api_class.AttachmentParsingConfig abacusai.api_class._ApiClassFactory abacusai.api_class.DatasetConfig abacusai.api_class.DatasetDocumentProcessingConfig abacusai.api_class.ApplicationConnectorDatasetConfig abacusai.api_class.ConfluenceDatasetConfig abacusai.api_class.BoxDatasetConfig abacusai.api_class.GoogleAnalyticsDatasetConfig abacusai.api_class.GoogleDriveDatasetConfig abacusai.api_class.JiraDatasetConfig abacusai.api_class.OneDriveDatasetConfig 
abacusai.api_class.SharepointDatasetConfig abacusai.api_class.ZendeskDatasetConfig abacusai.api_class.AbacusUsageMetricsDatasetConfig abacusai.api_class.TeamsScraperDatasetConfig abacusai.api_class.FreshserviceDatasetConfig abacusai.api_class.SftpDatasetConfig abacusai.api_class._ApplicationConnectorDatasetConfigFactory abacusai.api_class.ApiClass abacusai.api_class._ApiClassFactory abacusai.api_class.PredictionArguments abacusai.api_class.OptimizationPredictionArguments abacusai.api_class.TimeseriesAnomalyPredictionArguments abacusai.api_class.ChatLLMPredictionArguments abacusai.api_class.RegressionPredictionArguments abacusai.api_class.ForecastingPredictionArguments abacusai.api_class.CumulativeForecastingPredictionArguments abacusai.api_class.NaturalLanguageSearchPredictionArguments abacusai.api_class.FeatureStorePredictionArguments abacusai.api_class._PredictionArgumentsFactory abacusai.api_class.ApiClass abacusai.api_class.VectorStoreTextEncoder abacusai.api_class.VectorStoreConfig abacusai.api_class.ApiEnum abacusai.api_class.ProblemType abacusai.api_class.RegressionObjective abacusai.api_class.RegressionTreeHPOMode abacusai.api_class.PartialDependenceAnalysis abacusai.api_class.RegressionAugmentationStrategy abacusai.api_class.RegressionTargetTransform abacusai.api_class.RegressionTypeOfSplit abacusai.api_class.RegressionTimeSplitMethod abacusai.api_class.RegressionLossFunction abacusai.api_class.ExplainerType abacusai.api_class.SamplingMethodType abacusai.api_class.MergeMode abacusai.api_class.OperatorType abacusai.api_class.MarkdownOperatorInputType abacusai.api_class.FillLogic abacusai.api_class.BatchSize abacusai.api_class.HolidayCalendars abacusai.api_class.FileFormat abacusai.api_class.ExperimentationMode abacusai.api_class.PersonalizationTrainingMode abacusai.api_class.PersonalizationObjective abacusai.api_class.ForecastingObjective abacusai.api_class.ForecastingFrequency abacusai.api_class.ForecastingDataSplitType abacusai.api_class.ForecastingLossFunction abacusai.api_class.ForecastingLocalScaling abacusai.api_class.ForecastingFillMethod abacusai.api_class.ForecastingQuanitlesExtensionMethod abacusai.api_class.TimeseriesAnomalyDataSplitType abacusai.api_class.TimeseriesAnomalyTypeOfAnomaly abacusai.api_class.TimeseriesAnomalyUseHeuristic abacusai.api_class.NERObjective abacusai.api_class.NERModelType abacusai.api_class.NLPDocumentFormat abacusai.api_class.SentimentType abacusai.api_class.ClusteringImputationMethod abacusai.api_class.ConnectorType abacusai.api_class.ApplicationConnectorType abacusai.api_class.StreamingConnectorType abacusai.api_class.PythonFunctionArgumentType abacusai.api_class.PythonFunctionOutputArgumentType abacusai.api_class.VectorStoreTextEncoder abacusai.api_class.LLMName abacusai.api_class.MonitorAlertType abacusai.api_class.FeatureDriftType abacusai.api_class.DataIntegrityViolationType abacusai.api_class.BiasType abacusai.api_class.AlertActionType abacusai.api_class.PythonFunctionType abacusai.api_class.EvalArtifactType abacusai.api_class.FieldDescriptorType abacusai.api_class.WorkflowNodeInputType abacusai.api_class.WorkflowNodeOutputType abacusai.api_class.OcrMode abacusai.api_class.DocumentType abacusai.api_class.StdDevThresholdType abacusai.api_class.DataType abacusai.api_class.AgentInterface abacusai.api_class.WorkflowNodeTemplateType abacusai.api_class.ProjectConfigType abacusai.api_class.CPUSize abacusai.api_class.MemorySize abacusai.api_class.ResponseSectionType abacusai.api_class.CodeLanguage abacusai.api_class.DeploymentConversationType 
abacusai.api_class.AgentClientType abacusai.api_class.ApiClass abacusai.api_class._ApiClassFactory abacusai.api_class.DocumentProcessingConfig abacusai.api_class.SamplingConfig abacusai.api_class.NSamplingConfig abacusai.api_class.PercentSamplingConfig abacusai.api_class._SamplingConfigFactory abacusai.api_class.MergeConfig abacusai.api_class.LastNMergeConfig abacusai.api_class.TimeWindowMergeConfig abacusai.api_class._MergeConfigFactory abacusai.api_class.OperatorConfig abacusai.api_class.UnpivotConfig abacusai.api_class.MarkdownConfig abacusai.api_class.CrawlerTransformConfig abacusai.api_class.ExtractDocumentDataConfig abacusai.api_class.DataGenerationConfig abacusai.api_class.UnionTransformConfig abacusai.api_class._OperatorConfigFactory abacusai.api_class.ApiClass abacusai.api_class._ApiClassFactory abacusai.api_class.TrainingConfig abacusai.api_class.PersonalizationTrainingConfig abacusai.api_class.RegressionTrainingConfig abacusai.api_class.ForecastingTrainingConfig abacusai.api_class.NamedEntityExtractionTrainingConfig abacusai.api_class.NaturalLanguageSearchTrainingConfig abacusai.api_class.ChatLLMTrainingConfig abacusai.api_class.SentenceBoundaryDetectionTrainingConfig abacusai.api_class.SentimentDetectionTrainingConfig abacusai.api_class.DocumentClassificationTrainingConfig abacusai.api_class.DocumentSummarizationTrainingConfig abacusai.api_class.DocumentVisualizationTrainingConfig abacusai.api_class.ClusteringTrainingConfig abacusai.api_class.ClusteringTimeseriesTrainingConfig abacusai.api_class.EventAnomalyTrainingConfig abacusai.api_class.TimeseriesAnomalyTrainingConfig abacusai.api_class.CumulativeForecastingTrainingConfig abacusai.api_class.ThemeAnalysisTrainingConfig abacusai.api_class.AIAgentTrainingConfig abacusai.api_class.CustomTrainedModelTrainingConfig abacusai.api_class.CustomAlgorithmTrainingConfig abacusai.api_class.OptimizationTrainingConfig abacusai.api_class._TrainingConfigFactory abacusai.api_class.DeployableAlgorithm abacusai.api_class.ApiClass abacusai.api_class.StdDevThresholdType abacusai.api_class.TimeWindowConfig abacusai.api_class.ForecastingMonitorConfig abacusai.api_class.StdDevThreshold abacusai.api_class.ItemAttributesStdDevThreshold abacusai.api_class.RestrictFeatureMappings abacusai.api_class.MonitorFilteringConfig abacusai.api_class.ApiClass abacusai.api_class._ApiClassFactory abacusai.api_class.AlertConditionConfig abacusai.api_class.AccuracyBelowThresholdConditionConfig abacusai.api_class.FeatureDriftConditionConfig abacusai.api_class.TargetDriftConditionConfig abacusai.api_class.HistoryLengthDriftConditionConfig abacusai.api_class.DataIntegrityViolationConditionConfig abacusai.api_class.BiasViolationConditionConfig abacusai.api_class.PredictionCountConditionConfig abacusai.api_class._AlertConditionConfigFactory abacusai.api_class.AlertActionConfig abacusai.api_class.EmailActionConfig abacusai.api_class._AlertActionConfigFactory abacusai.api_class.MonitorThresholdConfig abacusai.api_class.ApiClass abacusai.api_class._ApiClassFactory abacusai.api_class.FeatureMappingConfig abacusai.api_class.ProjectFeatureGroupTypeMappingsConfig abacusai.api_class.ConstraintConfig abacusai.api_class.ProjectFeatureGroupConfig abacusai.api_class.ConstraintProjectFeatureGroupConfig abacusai.api_class.ReviewModeProjectFeatureGroupConfig abacusai.api_class._ProjectFeatureGroupConfigFactory abacusai.api_class.ApiClass abacusai.api_class.PythonFunctionArgument abacusai.api_class.OutputVariableMapping abacusai.api_class.ApiClass abacusai.api_class._ApiClassFactory 
abacusai.api_class.FeatureGroupExportConfig abacusai.api_class.FileConnectorExportConfig abacusai.api_class.DatabaseConnectorExportConfig abacusai.api_class._FeatureGroupExportConfigFactory abacusai.api_class.ApiClass abacusai.api_class.ResponseSection abacusai.api_class.AgentFlowButtonResponseSection abacusai.api_class.ImageUrlResponseSection abacusai.api_class.TextResponseSection abacusai.api_class.RuntimeSchemaResponseSection abacusai.api_class.CodeResponseSection abacusai.api_class.Base64ImageResponseSection abacusai.api_class.CollapseResponseSection abacusai.api_class.ListResponseSection abacusai.api_class.ChartResponseSection abacusai.api_class.DataframeResponseSection Functions --------- .. autoapisummary:: abacusai.api_class.get_clean_function_source_code_for_agent abacusai.api_class.validate_constructor_arg_types abacusai.api_class.validate_input_dict_param abacusai.api_class.deprecated_enums Package Contents ---------------- .. py:class:: ApiClass Bases: :py:obj:`abc.ABC` Helper class that provides a standard way to create an ABC using inheritance. .. py:attribute:: _upper_snake_case_keys :type: bool :value: False .. py:attribute:: _support_kwargs :type: bool :value: False .. py:method:: __post_init__() .. py:method:: _get_builder() :classmethod: .. py:method:: __str__() .. py:method:: _repr_html_() .. py:method:: __getitem__(item) .. py:method:: __setitem__(item, value) .. py:method:: _unset_item(item) .. py:method:: get(item, default = None) .. py:method:: pop(item, default = NotImplemented) .. py:method:: to_dict() Standardizes converting an ApiClass to dictionary. Keys of response dictionary are converted to camel case. This also validates the fields ( type, value, etc ) received in the dictionary. .. py:method:: from_dict(input_dict) :classmethod: .. py:function:: get_clean_function_source_code_for_agent(func) .. py:function:: validate_constructor_arg_types(friendly_class_name=None) .. py:function:: validate_input_dict_param(dict_object, friendly_class_name, must_contain=[]) .. py:class:: FieldDescriptor Bases: :py:obj:`abacusai.api_class.abstract.ApiClass` Configs for vector store indexing. :param field: The field to be extracted. This will be used as the key in the response. :type field: str :param description: The description of this field. If not included, the response_field will be used. :type description: str :param example_extraction: An example of this extracted field. :type example_extraction: Union[str, int, bool, float] :param type: The type of this field. If not provided, the default type is STRING. :type type: FieldDescriptorType .. py:attribute:: field :type: str .. py:attribute:: description :type: str :value: None .. py:attribute:: example_extraction :type: Union[str, int, bool, float, list, dict] :value: None .. py:attribute:: type :type: abacusai.api_class.enums.FieldDescriptorType .. py:class:: JSONSchema .. py:method:: from_fields_list(fields_list) :classmethod: .. py:method:: to_fields_list(json_schema) :classmethod: .. py:class:: WorkflowNodeInputMapping Bases: :py:obj:`abacusai.api_class.abstract.ApiClass` Represents a mapping of inputs to a workflow node. :param name: The name of the input variable of the node function. :type name: str :param variable_type: The type of the input. If the type is `IGNORE`, the input will be ignored. :type variable_type: Union[WorkflowNodeInputType, str] :param variable_source: The name of the node this variable is sourced from. If the type is `WORKFLOW_VARIABLE`, the value given by the source node will be directly used. 
If the type is `USER_INPUT`, the value given by the source node will be used as the default initial value before the user edits it. Set to `None` if the type is `USER_INPUT` and the variable doesn't need a pre-filled initial value. :type variable_source: str :param is_required: Indicates whether the input is required. Defaults to True. :type is_required: bool :param description: The description of this input. :type description: str :param constant_value: The constant value of this input if variable type is CONSTANT. Only applicable for template nodes. :type constant_value: str .. py:attribute:: name :type: str .. py:attribute:: variable_type :type: abacusai.api_class.enums.WorkflowNodeInputType .. py:attribute:: variable_source :type: str :value: None .. py:attribute:: source_prop :type: str :value: None .. py:attribute:: is_required :type: bool :value: True .. py:attribute:: description :type: str :value: None .. py:attribute:: constant_value :type: str :value: None .. py:method:: __post_init__() .. py:method:: to_dict() Standardizes converting an ApiClass to dictionary. Keys of response dictionary are converted to camel case. This also validates the fields ( type, value, etc ) received in the dictionary. .. py:method:: from_dict(mapping) :classmethod: .. py:class:: WorkflowNodeInputSchema Bases: :py:obj:`abacusai.api_class.abstract.ApiClass`, :py:obj:`JSONSchema` A schema conformant to react-jsonschema-form for workflow node input. To initialize a WorkflowNodeInputSchema dependent on another node's output, use the from_workflow_node method. :param json_schema: The JSON schema for the input, conformant to react-jsonschema-form specification. Must define keys like "title", "type", and "properties". Supported elements include Checkbox, Radio Button, Dropdown, Textarea, Number, Date, and file upload. Nested elements, arrays, and other complex types are not supported. :type json_schema: dict :param ui_schema: The UI schema for the input, conformant to react-jsonschema-form specification. :type ui_schema: dict .. py:attribute:: json_schema :type: dict .. py:attribute:: ui_schema :type: dict .. py:attribute:: schema_source :type: str :value: None .. py:attribute:: schema_prop :type: str :value: None .. py:attribute:: runtime_schema :type: bool :value: False .. py:method:: to_dict() Standardizes converting an ApiClass to dictionary. Keys of response dictionary are converted to camel case. This also validates the fields ( type, value, etc ) received in the dictionary. .. py:method:: from_dict(schema) :classmethod: .. py:method:: from_workflow_node(schema_source, schema_prop) :classmethod: Creates a WorkflowNodeInputSchema instance which references the schema generated by a WorkflowGraphNode. :param schema_source: The name of the source WorkflowGraphNode. :type schema_source: str :param schema_prop: The name of the input schema parameter which source node outputs. :type schema_prop: str .. py:method:: from_input_mappings(input_mappings) :classmethod: Creates a json_schema for the input schema of the node from it's input mappings. :param input_mappings: The input mappings for the node. :type input_mappings: List[WorkflowNodeInputMapping] .. py:class:: WorkflowNodeOutputMapping Bases: :py:obj:`abacusai.api_class.abstract.ApiClass` Represents a mapping of output from a workflow node. :param name: The name of the output. :type name: str :param variable_type: The type of the output in the form of an enum or a string. 
:type variable_type: Union[WorkflowNodeOutputType, str] :param description: The description of this output. :type description: str .. py:attribute:: name :type: str .. py:attribute:: variable_type :type: Union[abacusai.api_class.enums.WorkflowNodeOutputType, str] .. py:attribute:: description :type: str :value: None .. py:method:: __post_init__() .. py:method:: to_dict() Standardizes converting an ApiClass to dictionary. Keys of response dictionary are converted to camel case. This also validates the fields ( type, value, etc ) received in the dictionary. .. py:method:: from_dict(mapping) :classmethod: .. py:class:: WorkflowNodeOutputSchema Bases: :py:obj:`abacusai.api_class.abstract.ApiClass`, :py:obj:`JSONSchema` A schema conformant to react-jsonschema-form for a workflow node output. :param json_schema: The JSON schema for the output, conformant to react-jsonschema-form specification. :type json_schema: dict .. py:attribute:: json_schema :type: dict .. py:method:: to_dict() Standardizes converting an ApiClass to dictionary. Keys of response dictionary are converted to camel case. This also validates the fields ( type, value, etc ) received in the dictionary. .. py:method:: from_dict(schema) :classmethod: .. py:class:: TriggerConfig Bases: :py:obj:`abacusai.api_class.abstract.ApiClass` Represents the configuration for a trigger workflow node. :param sleep_time: The time in seconds to wait before the node gets executed again. :type sleep_time: int .. py:attribute:: sleep_time :type: int :value: None .. py:method:: to_dict() Standardizes converting an ApiClass to dictionary. Keys of response dictionary are converted to camel case. This also validates the fields ( type, value, etc ) received in the dictionary. .. py:method:: from_dict(configs) :classmethod: .. py:class:: WorkflowGraphNode(name, function = None, input_mappings = None, output_mappings = None, function_name = None, source_code = None, input_schema = None, output_schema = None, template_metadata = None, trigger_config = None) Bases: :py:obj:`abacusai.api_class.abstract.ApiClass` Represents a node in an Agent workflow graph. :param name: A unique name for the workflow node. :type name: str :param input_mappings: List of input mappings for the node. Each arg/kwarg of the node function should have a corresponding input mapping. :type input_mappings: List[WorkflowNodeInputMapping] :param output_mappings: List of outputs for the node. Each field in the returned dict/AgentResponse must have a corresponding output in the list. :type output_mappings: List[str] :param function: The callable node function reference. :type function: callable :param input_schema: The react json schema for the user input variables. :type input_schema: WorkflowNodeInputSchema :param output_schema: The list of outputs to be shown on UI. Each output corresponds to a field in the output mappings of the node. :type output_schema: List[str] Additional Attributes: function_name (str): The name of the function. source_code (str): The source code of the function. trigger_config (TriggerConfig): The configuration for a trigger workflow node. .. py:attribute:: template_metadata :value: None .. py:attribute:: trigger_config :value: None .. py:method:: _raw_init(name, input_mappings = None, output_mappings = None, function = None, function_name = None, source_code = None, input_schema = None, output_schema = None, template_metadata = None, trigger_config = None) :classmethod: .. 
py:method:: from_template(template_name, name, configs = None, input_mappings = None, input_schema = None, output_schema = None, sleep_time = None) :classmethod: .. py:method:: from_tool(tool_name, name, configs = None, input_mappings = None, input_schema = None, output_schema = None) :classmethod: .. py:method:: from_system_tool(tool_name, name, configs = None, input_mappings = None, input_schema = None, output_schema = None) :classmethod: .. py:method:: to_dict() Standardizes converting an ApiClass to dictionary. Keys of response dictionary are converted to camel case. This also validates the fields ( type, value, etc ) received in the dictionary. .. py:method:: is_template_node() .. py:method:: is_trigger_node() .. py:method:: from_dict(node) :classmethod: .. py:method:: __setattr__(name, value) .. py:method:: __getattribute__(name) .. py:class:: Outputs(node) .. py:attribute:: node .. py:method:: __getattr__(name) .. py:property:: outputs .. py:class:: WorkflowGraphEdge(source, target, details = None) Bases: :py:obj:`abacusai.api_class.abstract.ApiClass` Represents an edge in an Agent workflow graph. To make an edge conditional, provide {'EXECUTION_CONDITION': ''} key-value in the details dictionary. The condition should be a Pythonic expression string that evaluates to a boolean value and only depends on the outputs of the source node of the edge. :param source: The name of the source node of the edge. :type source: str :param target: The name of the target node of the edge. :type target: str :param details: Additional details about the edge. Like the condition for edge execution. :type details: dict .. py:attribute:: source :type: Union[str, WorkflowGraphNode] .. py:attribute:: target :type: Union[str, WorkflowGraphNode] .. py:attribute:: details :type: dict .. py:method:: to_nx_edge() .. py:class:: WorkflowGraph Bases: :py:obj:`abacusai.api_class.abstract.ApiClass` Represents an Agent workflow graph. :param nodes: A list of nodes in the workflow graph. :type nodes: List[WorkflowGraphNode] :param primary_start_node: The primary node to start the workflow from. :type primary_start_node: Union[str, WorkflowGraphNode] :param common_source_code: Common source code that can be used across all nodes. :type common_source_code: str .. py:attribute:: nodes :type: List[WorkflowGraphNode] :value: [] .. py:attribute:: edges :type: List[Union[WorkflowGraphEdge, Tuple[WorkflowGraphNode, WorkflowGraphNode, dict], Tuple[str, str, dict]]] :value: [] .. py:attribute:: primary_start_node :type: Union[str, WorkflowGraphNode] :value: None .. py:attribute:: common_source_code :type: str :value: None .. py:attribute:: specification_type :type: str :value: 'data_flow' .. py:method:: __post_init__() .. py:method:: to_dict() Standardizes converting an ApiClass to dictionary. Keys of response dictionary are converted to camel case. This also validates the fields ( type, value, etc ) received in the dictionary. .. py:method:: from_dict(graph) :classmethod: .. py:class:: AgentConversationMessage Bases: :py:obj:`abacusai.api_class.abstract.ApiClass` Message format for agent conversation :param is_user: Whether the message is from the user. :type is_user: bool :param text: The message's text. :type text: str :param document_contents: Dict of document name to document text in case of any document present. :type document_contents: dict .. py:attribute:: is_user :type: bool :value: None .. py:attribute:: text :type: str :value: None .. py:attribute:: document_contents :type: dict :value: None .. 
py:method:: to_dict() Standardizes converting an ApiClass to dictionary. Keys of response dictionary are converted to camel case. This also validates the fields ( type, value, etc ) received in the dictionary. .. py:class:: WorkflowNodeTemplateConfig Bases: :py:obj:`abacusai.api_class.abstract.ApiClass` Represents a WorkflowNode template config. :param name: A unique name of the config. :type name: str :param description: The description of this config. :type description: str :param default_value: Default value of the config to be used if value is not provided during node initialization. :type default_value: str :param is_required: Indicates whether the config is required. Defaults to False. :type is_required: bool .. py:attribute:: name :type: str .. py:attribute:: description :type: str :value: None .. py:attribute:: default_value :type: str :value: None .. py:attribute:: is_required :type: bool :value: False .. py:method:: to_dict() Standardizes converting an ApiClass to dictionary. Keys of response dictionary are converted to camel case. This also validates the fields ( type, value, etc ) received in the dictionary. .. py:method:: from_dict(mapping) :classmethod: .. py:class:: WorkflowNodeTemplateInput Bases: :py:obj:`abacusai.api_class.abstract.ApiClass` Represents an input to the workflow node generated using template. :param name: A unique name of the input. :type name: str :param is_required: Indicates whether the input is required. Defaults to False. :type is_required: bool :param description: The description of this input. :type description: str .. py:attribute:: name :type: str .. py:attribute:: is_required :type: bool :value: False .. py:attribute:: description :type: str :value: '' .. py:method:: to_dict() Standardizes converting an ApiClass to dictionary. Keys of response dictionary are converted to camel case. This also validates the fields ( type, value, etc ) received in the dictionary. .. py:method:: from_dict(mapping) :classmethod: .. py:class:: WorkflowNodeTemplateOutput Bases: :py:obj:`abacusai.api_class.abstract.ApiClass` Represents an output returned by the workflow node generated using template. :param name: The name of the output. :type name: str :param variable_type: The type of the output. :type variable_type: WorkflowNodeOutputType :param description: The description of this output. :type description: str .. py:attribute:: name :type: str .. py:attribute:: variable_type :type: abacusai.api_class.enums.WorkflowNodeOutputType .. py:attribute:: description :type: str :value: '' .. py:method:: to_dict() Standardizes converting an ApiClass to dictionary. Keys of response dictionary are converted to camel case. This also validates the fields ( type, value, etc ) received in the dictionary. .. py:method:: from_dict(mapping) :classmethod: .. py:class:: ApiClass Bases: :py:obj:`abc.ABC` Helper class that provides a standard way to create an ABC using inheritance. .. py:attribute:: _upper_snake_case_keys :type: bool :value: False .. py:attribute:: _support_kwargs :type: bool :value: False .. py:method:: __post_init__() .. py:method:: _get_builder() :classmethod: .. py:method:: __str__() .. py:method:: _repr_html_() .. py:method:: __getitem__(item) .. py:method:: __setitem__(item, value) .. py:method:: _unset_item(item) .. py:method:: get(item, default = None) .. py:method:: pop(item, default = NotImplemented) .. py:method:: to_dict() Standardizes converting an ApiClass to dictionary. Keys of response dictionary are converted to camel case. 
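For example, a minimal round-trip sketch (assuming the ParsingConfig subclass documented later on this page; the exact camel-case keys depend on the fields of the concrete subclass):

.. code-block:: python

   from abacusai.api_class import ParsingConfig

   # Build a config, serialize it, and rebuild it from the dictionary form.
   config = ParsingConfig(csv_delimiter='|')
   payload = config.to_dict()                    # snake_case fields become camelCase keys, e.g. 'csvDelimiter'
   restored = ParsingConfig.from_dict(payload)   # from_dict() is the inverse of to_dict()
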
This also validates the fields ( type, value, etc ) received in the dictionary. .. py:method:: from_dict(input_dict) :classmethod: .. py:class:: HotkeyPrompt Bases: :py:obj:`abacusai.api_class.abstract.ApiClass` A config class for a Data Science Co-Pilot Hotkey :param prompt: The prompt to send to Data Science Co-Pilot :type prompt: str :param title: A short, descriptive title for the prompt. If not provided, one will be automatically generated. :type title: str .. py:attribute:: prompt :type: str .. py:attribute:: title :type: str :value: None .. py:attribute:: disable_problem_type_context :type: bool :value: True .. py:attribute:: ignore_history :type: bool :value: None .. py:class:: ApiClass Bases: :py:obj:`abc.ABC` Helper class that provides a standard way to create an ABC using inheritance. .. py:attribute:: _upper_snake_case_keys :type: bool :value: False .. py:attribute:: _support_kwargs :type: bool :value: False .. py:method:: __post_init__() .. py:method:: _get_builder() :classmethod: .. py:method:: __str__() .. py:method:: _repr_html_() .. py:method:: __getitem__(item) .. py:method:: __setitem__(item, value) .. py:method:: _unset_item(item) .. py:method:: get(item, default = None) .. py:method:: pop(item, default = NotImplemented) .. py:method:: to_dict() Standardizes converting an ApiClass to dictionary. Keys of response dictionary are converted to camel case. This also validates the fields ( type, value, etc ) received in the dictionary. .. py:method:: from_dict(input_dict) :classmethod: .. py:class:: _ApiClassFactory Bases: :py:obj:`abc.ABC` Helper class that provides a standard way to create an ABC using inheritance. .. py:attribute:: config_abstract_class :value: None .. py:attribute:: config_class_key :value: None .. py:attribute:: config_class_map .. py:method:: from_dict(config) :classmethod: .. py:class:: BatchPredictionArgs Bases: :py:obj:`abacusai.api_class.abstract.ApiClass` An abstract class for Batch Prediction args specific to problem type. .. py:attribute:: _support_kwargs :type: bool :value: True .. py:attribute:: kwargs :type: dict .. py:attribute:: problem_type :type: abacusai.api_class.enums.ProblemType :value: None .. py:method:: _get_builder() :classmethod: .. py:class:: ForecastingBatchPredictionArgs Bases: :py:obj:`BatchPredictionArgs` Batch Prediction Config for the FORECASTING problem type :param for_eval: If True, the test fold which was created during training and used for metrics calculation will be used as input data. These predictions are hence, used for model evaluation :type for_eval: bool :param predictions_start_date: The start date for predictions. Accepts timestamp integers and strings in many standard formats such as YYYY-MM-DD, YYYY-MM-DD HH:MM:SS, or YYYY-MM-DDTHH:MM:SS. If not specified, the prediction start date will be automatically defined. :type predictions_start_date: str :param use_prediction_offset: If True, use prediction offset. :type use_prediction_offset: bool :param start_date_offset: Sets prediction start date as this offset relative to the prediction start date. :type start_date_offset: int :param forecasting_horizon: The number of timestamps to predict in the future. Range: [1, 1000]. :type forecasting_horizon: int :param item_attributes_to_include_in_the_result: List of columns to include in the prediction output. :type item_attributes_to_include_in_the_result: list :param explain_predictions: If True, calculates explanations for the forecasted values along with predictions. 
:type explain_predictions: bool :param create_monitor: Controls whether to automatically create a monitor to calculate the drift each time the batch prediction is run. Defaults to true if not specified. :type create_monitor: bool .. py:attribute:: for_eval :type: bool :value: None .. py:attribute:: predictions_start_date :type: str :value: None .. py:attribute:: use_prediction_offset :type: bool :value: None .. py:attribute:: start_date_offset :type: int :value: None .. py:attribute:: forecasting_horizon :type: int :value: None .. py:attribute:: item_attributes_to_include_in_the_result :type: list :value: None .. py:attribute:: explain_predictions :type: bool :value: None .. py:attribute:: create_monitor :type: bool :value: None .. py:method:: __post_init__() .. py:class:: NamedEntityExtractionBatchPredictionArgs Bases: :py:obj:`BatchPredictionArgs` Batch Prediction Config for the NAMED_ENTITY_EXTRACTION problem type :param for_eval: If True, the test fold which was created during training and used for metrics calculation will be used as input data. These predictions are hence used for model evaluation. :type for_eval: bool .. py:attribute:: for_eval :type: bool :value: None .. py:method:: __post_init__() .. py:class:: PersonalizationBatchPredictionArgs Bases: :py:obj:`BatchPredictionArgs` Batch Prediction Config for the PERSONALIZATION problem type :param for_eval: If True, the test fold which was created during training and used for metrics calculation will be used as input data. These predictions are hence used for model evaluation. :type for_eval: bool :param number_of_items: Number of items to recommend. :type number_of_items: int :param item_attributes_to_include_in_the_result: List of columns to include in the prediction output. :type item_attributes_to_include_in_the_result: list :param score_field: If specified, relative item scores will be returned using a field with this name. :type score_field: str .. py:attribute:: for_eval :type: bool :value: None .. py:attribute:: number_of_items :type: int :value: None .. py:attribute:: item_attributes_to_include_in_the_result :type: list :value: None .. py:attribute:: score_field :type: str :value: None .. py:method:: __post_init__() .. py:class:: PredictiveModelingBatchPredictionArgs Bases: :py:obj:`BatchPredictionArgs` Batch Prediction Config for the PREDICTIVE_MODELING problem type :param for_eval: If True, the test fold which was created during training and used for metrics calculation will be used as input data. These predictions are hence used for model evaluation. :type for_eval: bool :param explainer_type: The type of explainer to use to generate explanations on the batch prediction. :type explainer_type: enums.ExplainerType :param number_of_samples_to_use_for_explainer: Number of samples to use for the kernel explainer. :type number_of_samples_to_use_for_explainer: int :param include_multi_class_explanations: If True, includes explanations for all classes in multi-class classification. :type include_multi_class_explanations: bool :param features_considered_constant_for_explanations: Comma-separated list of fields to treat as constant in SHAP explanations. :type features_considered_constant_for_explanations: str :param importance_of_records_in_nested_columns: Returns importance of each index in the specified nested column instead of SHAP column explanations. :type importance_of_records_in_nested_columns: str :param explanation_filter_lower_bound: If set, explanations will be limited to predictions above this value. Range: [0, 1].
:type explanation_filter_lower_bound: float :param explanation_filter_upper_bound: If set explanations will be limited to predictions below this value, Range: [0, 1]. :type explanation_filter_upper_bound: float :param explanation_filter_label: For classification problems specifies the label to which the explanation bounds are applied. :type explanation_filter_label: str :param output_columns: A list of column names to include in the prediction result. :type output_columns: list :param explain_predictions: If True, calculates explanations for the predicted values along with predictions. :type explain_predictions: bool :param create_monitor: Controls whether to automatically create a monitor to calculate the drift each time the batch prediction is run. Defaults to true if not specified. :type create_monitor: bool .. py:attribute:: for_eval :type: bool :value: None .. py:attribute:: explainer_type :type: abacusai.api_class.enums.ExplainerType :value: None .. py:attribute:: number_of_samples_to_use_for_explainer :type: int :value: None .. py:attribute:: include_multi_class_explanations :type: bool :value: None .. py:attribute:: features_considered_constant_for_explanations :type: str :value: None .. py:attribute:: importance_of_records_in_nested_columns :type: str :value: None .. py:attribute:: explanation_filter_lower_bound :type: float :value: None .. py:attribute:: explanation_filter_upper_bound :type: float :value: None .. py:attribute:: explanation_filter_label :type: str :value: None .. py:attribute:: output_columns :type: list :value: None .. py:attribute:: explain_predictions :type: bool :value: None .. py:attribute:: create_monitor :type: bool :value: None .. py:method:: __post_init__() .. py:class:: PretrainedModelsBatchPredictionArgs Bases: :py:obj:`BatchPredictionArgs` Batch Prediction Config for the PRETRAINED_MODELS problem type :param for_eval: If True, the test fold which was created during training and used for metrics calculation will be used as input data. These predictions are hence, used for model evaluation. :type for_eval: bool :param files_output_location_prefix: The output location prefix for the files. :type files_output_location_prefix: str :param channel_id_to_label_map: JSON string for the map from channel ids to their labels. :type channel_id_to_label_map: str .. py:attribute:: for_eval :type: bool :value: None .. py:attribute:: files_output_location_prefix :type: str :value: None .. py:attribute:: channel_id_to_label_map :type: str :value: None .. py:method:: __post_init__() .. py:class:: SentenceBoundaryDetectionBatchPredictionArgs Bases: :py:obj:`BatchPredictionArgs` Batch Prediction Config for the SENTENCE_BOUNDARY_DETECTION problem type :param for_eval: If True, the test fold which was created during training and used for metrics calculation will be used as input data. These predictions are hence, used for model evaluation :type for_eval: bool :param explode_output: Explode data so there is one sentence per row. :type explode_output: bool .. py:attribute:: for_eval :type: bool :value: None .. py:attribute:: explode_output :type: bool :value: None .. py:method:: __post_init__() .. py:class:: ThemeAnalysisBatchPredictionArgs Bases: :py:obj:`BatchPredictionArgs` Batch Prediction Config for the THEME_ANALYSIS problem type :param for_eval: If True, the test fold which was created during training and used for metrics calculation will be used as input data. These predictions are hence, used for model evaluation. 
:type for_eval: bool :param analysis_frequency: The length of each analysis interval. :type analysis_frequency: str :param start_date: The end point for predictions. :type start_date: str :param analysis_days: How many days to analyze. :type analysis_days: int .. py:attribute:: for_eval :type: bool :value: None .. py:attribute:: analysis_frequency :type: str :value: None .. py:attribute:: start_date :type: str :value: None .. py:attribute:: analysis_days :type: int :value: None .. py:method:: __post_init__() .. py:class:: ChatLLMBatchPredictionArgs Bases: :py:obj:`BatchPredictionArgs` Batch Prediction Config for the ChatLLM problem type :param for_eval: If True, the test fold which was created during training and used for metrics calculation will be used as input data. These predictions are hence, used for model evaluation. :type for_eval: bool .. py:attribute:: for_eval :type: bool :value: None .. py:method:: __post_init__() .. py:class:: TrainablePlugAndPlayBatchPredictionArgs Bases: :py:obj:`BatchPredictionArgs` Batch Prediction Config for the TrainablePlugAndPlay problem type :param for_eval: If True, the test fold which was created during training and used for metrics calculation will be used as input data. These predictions are hence, used for model evaluation. :type for_eval: bool :param create_monitor: Controls whether to automatically create a monitor to calculate the drift each time the batch prediction is run. Defaults to true if not specified. :type create_monitor: bool .. py:attribute:: for_eval :type: bool :value: None .. py:attribute:: create_monitor :type: bool :value: None .. py:method:: __post_init__() .. py:class:: AIAgentBatchPredictionArgs Bases: :py:obj:`BatchPredictionArgs` Batch Prediction Config for the AIAgents problem type .. py:method:: __post_init__() .. py:class:: _BatchPredictionArgsFactory Bases: :py:obj:`abacusai.api_class.abstract._ApiClassFactory` Helper class that provides a standard way to create an ABC using inheritance. .. py:attribute:: config_abstract_class .. py:attribute:: config_class_key :value: 'problem_type' .. py:attribute:: config_class_map .. py:class:: ApiClass Bases: :py:obj:`abc.ABC` Helper class that provides a standard way to create an ABC using inheritance. .. py:attribute:: _upper_snake_case_keys :type: bool :value: False .. py:attribute:: _support_kwargs :type: bool :value: False .. py:method:: __post_init__() .. py:method:: _get_builder() :classmethod: .. py:method:: __str__() .. py:method:: _repr_html_() .. py:method:: __getitem__(item) .. py:method:: __setitem__(item, value) .. py:method:: _unset_item(item) .. py:method:: get(item, default = None) .. py:method:: pop(item, default = NotImplemented) .. py:method:: to_dict() Standardizes converting an ApiClass to dictionary. Keys of response dictionary are converted to camel case. This also validates the fields ( type, value, etc ) received in the dictionary. .. py:method:: from_dict(input_dict) :classmethod: .. py:class:: Blob(contents, mime_type = None, filename = None, size = None) Bases: :py:obj:`abacusai.api_class.abstract.ApiClass` An object for storing and passing file data. In AI Agents, if a function accepts file upload as an argument, the uploaded file is passed as a Blob object. If a function returns a Blob object, it will be rendered as a file download. :param contents: The binary contents of the blob. :type contents: bytes :param mime_type: The mime type of the blob. :type mime_type: str :param filename: The original filename of the blob. 
:type filename: str :param size: The size of the blob in bytes. :type size: int .. py:attribute:: filename :type: str .. py:attribute:: contents :type: bytes .. py:attribute:: mime_type :type: str .. py:attribute:: size :type: int .. py:method:: from_local_file(file_path) :classmethod: .. py:method:: from_contents(contents, filename = None, mime_type = None) :classmethod: .. py:class:: BlobInput(filename = None, contents = None, mime_type = None, size = None) Bases: :py:obj:`Blob` An object for storing and passing file data. In AI Agents, if a function accepts file upload as an argument, the uploaded file is passed as a BlobInput object. :param filename: The original filename of the blob. :type filename: str :param contents: The binary contents of the blob. :type contents: bytes :param mime_type: The mime type of the blob. :type mime_type: str :param size: The size of the blob in bytes. :type size: int .. py:class:: _ApiClassFactory Bases: :py:obj:`abc.ABC` Helper class that provides a standard way to create an ABC using inheritance. .. py:attribute:: config_abstract_class :value: None .. py:attribute:: config_class_key :value: None .. py:attribute:: config_class_map .. py:method:: from_dict(config) :classmethod: .. py:class:: DatasetConfig Bases: :py:obj:`abacusai.api_class.abstract.ApiClass` An abstract class for dataset configs :param is_documentset: Whether the dataset is a document set :type is_documentset: bool .. py:attribute:: is_documentset :type: bool :value: None .. py:class:: StreamingConnectorDatasetConfig Bases: :py:obj:`abacusai.api_class.dataset.DatasetConfig` An abstract class for dataset configs specific to streaming connectors. :param streaming_connector_type: The type of streaming connector :type streaming_connector_type: StreamingConnectorType .. py:attribute:: streaming_connector_type :type: abacusai.api_class.enums.StreamingConnectorType :value: None .. py:method:: _get_builder() :classmethod: .. py:class:: KafkaDatasetConfig Bases: :py:obj:`StreamingConnectorDatasetConfig` Dataset config for Kafka Streaming Connector :param topic: The kafka topic to consume :type topic: str .. py:attribute:: topic :type: str :value: None .. py:method:: __post_init__() .. py:class:: _StreamingConnectorDatasetConfigFactory Bases: :py:obj:`abacusai.api_class.abstract._ApiClassFactory` Helper class that provides a standard way to create an ABC using inheritance. .. py:attribute:: config_abstract_class .. py:attribute:: config_class_key :value: 'streaming_connector_type' .. py:attribute:: config_class_map .. py:class:: ApiClass Bases: :py:obj:`abc.ABC` Helper class that provides a standard way to create an ABC using inheritance. .. py:attribute:: _upper_snake_case_keys :type: bool :value: False .. py:attribute:: _support_kwargs :type: bool :value: False .. py:method:: __post_init__() .. py:method:: _get_builder() :classmethod: .. py:method:: __str__() .. py:method:: _repr_html_() .. py:method:: __getitem__(item) .. py:method:: __setitem__(item, value) .. py:method:: _unset_item(item) .. py:method:: get(item, default = None) .. py:method:: pop(item, default = NotImplemented) .. py:method:: to_dict() Standardizes converting an ApiClass to dictionary. Keys of response dictionary are converted to camel case. This also validates the fields ( type, value, etc ) received in the dictionary. .. py:method:: from_dict(input_dict) :classmethod: .. py:class:: DocumentType Bases: :py:obj:`ApiEnum` Generic enumeration. Derive from this class to define new enumerations. .. 
py:attribute:: SIMPLE_TEXT :value: 'SIMPLE_TEXT' .. py:attribute:: TEXT :value: 'TEXT' .. py:attribute:: TABLES_AND_FORMS :value: 'TABLES_AND_FORMS' .. py:attribute:: EMBEDDED_IMAGES :value: 'EMBEDDED_IMAGES' .. py:attribute:: SCANNED_TEXT :value: 'SCANNED_TEXT' .. py:attribute:: COMPREHENSIVE_MARKDOWN :value: 'COMPREHENSIVE_MARKDOWN' .. py:method:: is_ocr_forced(document_type) :classmethod: .. py:class:: OcrMode Bases: :py:obj:`ApiEnum` Generic enumeration. Derive from this class to define new enumerations. .. py:attribute:: AUTO :value: 'AUTO' .. py:attribute:: DEFAULT :value: 'DEFAULT' .. py:attribute:: LAYOUT :value: 'LAYOUT' .. py:attribute:: SCANNED :value: 'SCANNED' .. py:attribute:: COMPREHENSIVE :value: 'COMPREHENSIVE' .. py:attribute:: COMPREHENSIVE_V2 :value: 'COMPREHENSIVE_V2' .. py:attribute:: COMPREHENSIVE_TABLE_MD :value: 'COMPREHENSIVE_TABLE_MD' .. py:attribute:: COMPREHENSIVE_FORM_MD :value: 'COMPREHENSIVE_FORM_MD' .. py:attribute:: COMPREHENSIVE_FORM_AND_TABLE_MD :value: 'COMPREHENSIVE_FORM_AND_TABLE_MD' .. py:attribute:: TESSERACT_FAST :value: 'TESSERACT_FAST' .. py:attribute:: LLM :value: 'LLM' .. py:attribute:: AUGMENTED_LLM :value: 'AUGMENTED_LLM' .. py:method:: aws_ocr_modes() :classmethod: .. py:class:: DatasetConfig Bases: :py:obj:`abacusai.api_class.abstract.ApiClass` An abstract class for dataset configs :param is_documentset: Whether the dataset is a document set :type is_documentset: bool .. py:attribute:: is_documentset :type: bool :value: None .. py:class:: ParsingConfig Bases: :py:obj:`abacusai.api_class.abstract.ApiClass` Custom config for dataset parsing. :param escape: Escape character for CSV files. Defaults to '"'. :type escape: str :param csv_delimiter: Delimiter for CSV files. Defaults to None. :type csv_delimiter: str :param file_path_with_schema: Path to the file with schema. Defaults to None. :type file_path_with_schema: str .. py:attribute:: escape :type: str :value: '"' .. py:attribute:: csv_delimiter :type: str :value: None .. py:attribute:: file_path_with_schema :type: str :value: None .. py:class:: DocumentProcessingConfig Bases: :py:obj:`abacusai.api_class.abstract.ApiClass` Document processing configuration. :param document_type: Type of document. Can be one of Text, Tables and Forms, Embedded Images, etc. If not specified, type will be decided automatically. :type document_type: DocumentType :param highlight_relevant_text: Whether to extract bounding boxes and highlight relevant text in search results. Defaults to False. :type highlight_relevant_text: bool :param extract_bounding_boxes: Whether to perform OCR and extract bounding boxes. If False, no OCR will be done but only the embedded text from digital documents will be extracted. Defaults to False. :type extract_bounding_boxes: bool :param ocr_mode: OCR mode. There are different OCR modes available for different kinds of documents and use cases. This option only takes effect when extract_bounding_boxes is True. :type ocr_mode: OcrMode :param use_full_ocr: Whether to perform full OCR. If True, OCR will be performed on the full page. If False, OCR will be performed on the non-text regions only. By default, it will be decided automatically based on the OCR mode and the document type. This option only takes effect when extract_bounding_boxes is True. :type use_full_ocr: bool :param remove_header_footer: Whether to remove headers and footers. Defaults to False. This option only takes effect when extract_bounding_boxes is True. 
:type remove_header_footer: bool :param remove_watermarks: Whether to remove watermarks. By default, it will be decided automatically based on the OCR mode and the document type. This option only takes effect when extract_bounding_boxes is True. :type remove_watermarks: bool :param convert_to_markdown: Whether to convert extracted text to markdown. Defaults to False. This option only takes effect when extract_bounding_boxes is True. :type convert_to_markdown: bool :param mask_pii: Whether to mask personally identifiable information (PII) in the document text/tokens. Defaults to False. :type mask_pii: bool :param extract_images: Whether to extract images from the document e.g. diagrams in a PDF page. Defaults to False. :type extract_images: bool .. py:attribute:: document_type :type: abacusai.api_class.enums.DocumentType :value: None .. py:attribute:: highlight_relevant_text :type: bool :value: None .. py:attribute:: extract_bounding_boxes :type: bool :value: False .. py:attribute:: ocr_mode :type: abacusai.api_class.enums.OcrMode .. py:attribute:: use_full_ocr :type: bool :value: None .. py:attribute:: remove_header_footer :type: bool :value: False .. py:attribute:: remove_watermarks :type: bool :value: True .. py:attribute:: convert_to_markdown :type: bool :value: False .. py:attribute:: mask_pii :type: bool :value: False .. py:attribute:: extract_images :type: bool :value: False .. py:method:: __post_init__() .. py:method:: _detect_ocr_mode() .. py:method:: _get_filtered_dict(config) :classmethod: Filters out default values from the config .. py:class:: DatasetDocumentProcessingConfig Bases: :py:obj:`DocumentProcessingConfig` Document processing configuration for dataset imports. :param extract_bounding_boxes: Whether to perform OCR and extract bounding boxes. If False, no OCR will be done but only the embedded text from digital documents will be extracted. Defaults to False. :type extract_bounding_boxes: bool :param ocr_mode: OCR mode. There are different OCR modes available for different kinds of documents and use cases. This option only takes effect when extract_bounding_boxes is True. :type ocr_mode: OcrMode :param use_full_ocr: Whether to perform full OCR. If True, OCR will be performed on the full page. If False, OCR will be performed on the non-text regions only. By default, it will be decided automatically based on the OCR mode and the document type. This option only takes effect when extract_bounding_boxes is True. :type use_full_ocr: bool :param remove_header_footer: Whether to remove headers and footers. Defaults to False. This option only takes effect when extract_bounding_boxes is True. :type remove_header_footer: bool :param remove_watermarks: Whether to remove watermarks. By default, it will be decided automatically based on the OCR mode and the document type. This option only takes effect when extract_bounding_boxes is True. :type remove_watermarks: bool :param convert_to_markdown: Whether to convert extracted text to markdown. Defaults to False. This option only takes effect when extract_bounding_boxes is True. :type convert_to_markdown: bool :param page_text_column: Name of the output column which contains the extracted text for each page. If not provided, no column will be created. :type page_text_column: str .. py:attribute:: page_text_column :type: str :value: None .. 
py:class:: IncrementalDatabaseConnectorConfig Bases: :py:obj:`abacusai.api_class.abstract.ApiClass` Config information for incremental datasets from database connectors :param timestamp_column: If dataset is incremental, this is the column name of the required column in the dataset. This column must contain timestamps in descending order which are used to determine the increments of the incremental dataset. :type timestamp_column: str .. py:attribute:: timestamp_column :type: str :value: None .. py:class:: AttachmentParsingConfig Bases: :py:obj:`abacusai.api_class.abstract.ApiClass` Config information for parsing attachments :param feature_group_name: feature group name :type feature_group_name: str :param column_name: column name :type column_name: str :param urls: list of urls :type urls: str .. py:attribute:: feature_group_name :type: str :value: None .. py:attribute:: column_name :type: str :value: None .. py:attribute:: urls :type: str :value: None .. py:class:: _ApiClassFactory Bases: :py:obj:`abc.ABC` Helper class that provides a standard way to create an ABC using inheritance. .. py:attribute:: config_abstract_class :value: None .. py:attribute:: config_class_key :value: None .. py:attribute:: config_class_map .. py:method:: from_dict(config) :classmethod: .. py:class:: DatasetConfig Bases: :py:obj:`abacusai.api_class.abstract.ApiClass` An abstract class for dataset configs :param is_documentset: Whether the dataset is a document set :type is_documentset: bool .. py:attribute:: is_documentset :type: bool :value: None .. py:class:: DatasetDocumentProcessingConfig Bases: :py:obj:`DocumentProcessingConfig` Document processing configuration for dataset imports. :param extract_bounding_boxes: Whether to perform OCR and extract bounding boxes. If False, no OCR will be done but only the embedded text from digital documents will be extracted. Defaults to False. :type extract_bounding_boxes: bool :param ocr_mode: OCR mode. There are different OCR modes available for different kinds of documents and use cases. This option only takes effect when extract_bounding_boxes is True. :type ocr_mode: OcrMode :param use_full_ocr: Whether to perform full OCR. If True, OCR will be performed on the full page. If False, OCR will be performed on the non-text regions only. By default, it will be decided automatically based on the OCR mode and the document type. This option only takes effect when extract_bounding_boxes is True. :type use_full_ocr: bool :param remove_header_footer: Whether to remove headers and footers. Defaults to False. This option only takes effect when extract_bounding_boxes is True. :type remove_header_footer: bool :param remove_watermarks: Whether to remove watermarks. By default, it will be decided automatically based on the OCR mode and the document type. This option only takes effect when extract_bounding_boxes is True. :type remove_watermarks: bool :param convert_to_markdown: Whether to convert extracted text to markdown. Defaults to False. This option only takes effect when extract_bounding_boxes is True. :type convert_to_markdown: bool :param page_text_column: Name of the output column which contains the extracted text for each page. If not provided, no column will be created. :type page_text_column: str .. py:attribute:: page_text_column :type: str :value: None .. py:class:: ApplicationConnectorDatasetConfig Bases: :py:obj:`abacusai.api_class.dataset.DatasetConfig` An abstract class for dataset configs specific to application connectors. 
:param application_connector_type: The type of application connector :type application_connector_type: enums.ApplicationConnectorType :param application_connector_id: The ID of the application connector :type application_connector_id: str :param document_processing_config: The document processing configuration. Only valid if is_documentset is True for the dataset. :type document_processing_config: DatasetDocumentProcessingConfig .. py:attribute:: application_connector_type :type: abacusai.api_class.enums.ApplicationConnectorType :value: None .. py:attribute:: application_connector_id :type: str :value: None .. py:attribute:: document_processing_config :type: abacusai.api_class.dataset.DatasetDocumentProcessingConfig :value: None .. py:method:: _get_builder() :classmethod: .. py:class:: ConfluenceDatasetConfig Bases: :py:obj:`ApplicationConnectorDatasetConfig` Dataset config for Confluence Application Connector :param location: The location of the pages to fetch :type location: str :param space_key: The space key of the space from which we fetch pages :type space_key: str :param pull_attachments: Whether to pull attachments for each page :type pull_attachments: bool :param extract_bounding_boxes: Whether to extract bounding boxes from the documents :type extract_bounding_boxes: bool .. py:attribute:: location :type: str :value: None .. py:attribute:: space_key :type: str :value: None .. py:attribute:: pull_attachments :type: bool :value: False .. py:attribute:: extract_bounding_boxes :type: bool :value: False .. py:method:: __post_init__() .. py:class:: BoxDatasetConfig Bases: :py:obj:`ApplicationConnectorDatasetConfig` Dataset config for Box Application Connector :param location: The regex location of the files to fetch :type location: str :param csv_delimiter: If the file format is CSV, use a specific CSV delimiter :type csv_delimiter: str :param merge_file_schemas: Signifies if the merge file schema policy is enabled. Not applicable if is_documentset is True :type merge_file_schemas: bool .. py:attribute:: location :type: str :value: None .. py:attribute:: csv_delimiter :type: str :value: None .. py:attribute:: merge_file_schemas :type: bool :value: False .. py:method:: __post_init__() .. py:class:: GoogleAnalyticsDatasetConfig Bases: :py:obj:`ApplicationConnectorDatasetConfig` Dataset config for Google Analytics Application Connector :param location: The view ID of the report in the connector to fetch :type location: str :param start_timestamp: Unix timestamp of the start of the period that will be queried :type start_timestamp: int :param end_timestamp: Unix timestamp of the end of the period that will be queried :type end_timestamp: int .. py:attribute:: location :type: str :value: None .. py:attribute:: start_timestamp :type: int :value: None .. py:attribute:: end_timestamp :type: int :value: None .. py:method:: __post_init__()
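A minimal construction sketch for one of these connector configs (the connector ID and view ID below are placeholders):

.. code-block:: python

   from abacusai.api_class import GoogleAnalyticsDatasetConfig

   # Pull one month of report data from a Google Analytics view.
   ga_config = GoogleAnalyticsDatasetConfig(
       application_connector_id='<application_connector_id>',
       location='<view_id>',
       start_timestamp=1704067200,  # 2024-01-01 00:00:00 UTC
       end_timestamp=1706745600,    # 2024-02-01 00:00:00 UTC
   )
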
py:attribute:: location :type: str :value: None .. py:attribute:: csv_delimiter :type: str :value: None .. py:attribute:: extract_bounding_boxes :type: bool :value: False .. py:attribute:: merge_file_schemas :type: bool :value: False .. py:method:: __post_init__() .. py:class:: JiraDatasetConfig Bases: :py:obj:`ApplicationConnectorDatasetConfig` Dataset config for Jira Application Connector :param jql: The JQL query for fetching issues :type jql: str .. py:attribute:: jql :type: str :value: None .. py:method:: __post_init__() .. py:class:: OneDriveDatasetConfig Bases: :py:obj:`ApplicationConnectorDatasetConfig` Dataset config for OneDrive Application Connector :param location: The regex location of the files to fetch :type location: str :param csv_delimiter: If the file format is CSV, use a specific csv delimiter :type csv_delimiter: str :param extract_bounding_boxes: Signifies whether to extract bounding boxes out of the documents. Only valid if is_documentset is True :type extract_bounding_boxes: bool :param merge_file_schemas: Signifies if the merge file schema policy is enabled. Not applicable if is_documentset is True :type merge_file_schemas: bool .. py:attribute:: location :type: str :value: None .. py:attribute:: csv_delimiter :type: str :value: None .. py:attribute:: extract_bounding_boxes :type: bool :value: False .. py:attribute:: merge_file_schemas :type: bool :value: False .. py:method:: __post_init__() .. py:class:: SharepointDatasetConfig Bases: :py:obj:`ApplicationConnectorDatasetConfig` Dataset config for Sharepoint Application Connector :param location: The regex location of the files to fetch :type location: str :param csv_delimiter: If the file format is CSV, use a specific csv delimiter :type csv_delimiter: str :param extract_bounding_boxes: Signifies whether to extract bounding boxes out of the documents. Only valid if is_documentset is True :type extract_bounding_boxes: bool :param merge_file_schemas: Signifies if the merge file schema policy is enabled. Not applicable if is_documentset is True :type merge_file_schemas: bool .. py:attribute:: location :type: str :value: None .. py:attribute:: csv_delimiter :type: str :value: None .. py:attribute:: extract_bounding_boxes :type: bool :value: False .. py:attribute:: merge_file_schemas :type: bool :value: False .. py:method:: __post_init__() .. py:class:: ZendeskDatasetConfig Bases: :py:obj:`ApplicationConnectorDatasetConfig` Dataset config for Zendesk Application Connector :param location: The regex location of the files to fetch :type location: str .. py:attribute:: location :type: str :value: None .. py:method:: __post_init__() .. py:class:: AbacusUsageMetricsDatasetConfig Bases: :py:obj:`ApplicationConnectorDatasetConfig` Dataset config for Abacus Usage Metrics Application Connector :param include_entire_conversation_history: Whether to show the entire history for this deployment conversation :type include_entire_conversation_history: bool :param include_all_feedback: Whether to include all feedback for this deployment conversation :type include_all_feedback: bool :param resolve_matching_documents: Whether to get matching document references for the response instead of the prompt. These are recalculated if highlights are unavailable in summary_info :type resolve_matching_documents: bool .. py:attribute:: include_entire_conversation_history :type: bool :value: False .. py:attribute:: include_all_feedback :type: bool :value: False .. py:attribute:: resolve_matching_documents :type: bool :value: False ..
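The connector-specific dataset configs above are plain dataclasses; when the dataset is a document set they can carry a ``DatasetDocumentProcessingConfig``. A hedged sketch using ``GoogleDriveDatasetConfig``, with a made-up location pattern:

.. code-block:: python

    from abacusai.api_class import (
        DatasetDocumentProcessingConfig,
        GoogleDriveDatasetConfig,
    )

    # Hypothetical: treat the fetched files as a document set and OCR them.
    drive_config = GoogleDriveDatasetConfig(
        location=r'Shared/reports/.*\.pdf',   # made-up regex location
        is_documentset=True,
        extract_bounding_boxes=True,
        document_processing_config=DatasetDocumentProcessingConfig(
            extract_bounding_boxes=True,
            convert_to_markdown=True,
        ),
    )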
py:method:: __post_init__() .. py:class:: TeamsScraperDatasetConfig Bases: :py:obj:`ApplicationConnectorDatasetConfig` Dataset config for Teams Scraper Application Connector :param pull_chat_messages: Whether to pull Teams chat messages :type pull_chat_messages: bool :param pull_channel_posts: Whether to pull posts for each channel :type pull_channel_posts: bool :param pull_transcripts: Whether to pull transcripts for calendar meetings :type pull_transcripts: bool .. py:attribute:: pull_chat_messages :type: bool :value: False .. py:attribute:: pull_channel_posts :type: bool :value: False .. py:attribute:: pull_transcripts :type: bool :value: False .. py:method:: __post_init__() .. py:class:: FreshserviceDatasetConfig Bases: :py:obj:`ApplicationConnectorDatasetConfig` Dataset config for Freshservice Application Connector .. py:method:: __post_init__() .. py:class:: SftpDatasetConfig Bases: :py:obj:`ApplicationConnectorDatasetConfig` Dataset config for SFTP Application Connector :param location: The regex location of the files to fetch :type location: str :param csv_delimiter: If the file format is CSV, use a specific csv delimiter :type csv_delimiter: str :param extract_bounding_boxes: Signifies whether to extract bounding boxes out of the documents. Only valid if is_documentset is True :type extract_bounding_boxes: bool :param merge_file_schemas: Signifies if the merge file schema policy is enabled. Not applicable if is_documentset is True :type merge_file_schemas: bool .. py:attribute:: location :type: str :value: None .. py:attribute:: csv_delimiter :type: str :value: None .. py:attribute:: extract_bounding_boxes :type: bool :value: False .. py:attribute:: merge_file_schemas :type: bool :value: False .. py:method:: __post_init__() .. py:class:: _ApplicationConnectorDatasetConfigFactory Bases: :py:obj:`abacusai.api_class.abstract._ApiClassFactory` Helper class that provides a standard way to create an ABC using inheritance. .. py:attribute:: config_abstract_class .. py:attribute:: config_class_key :value: 'application_connector_type' .. py:attribute:: config_class_map .. py:class:: ApiClass Bases: :py:obj:`abc.ABC` Helper class that provides a standard way to create an ABC using inheritance. .. py:attribute:: _upper_snake_case_keys :type: bool :value: False .. py:attribute:: _support_kwargs :type: bool :value: False .. py:method:: __post_init__() .. py:method:: _get_builder() :classmethod: .. py:method:: __str__() .. py:method:: _repr_html_() .. py:method:: __getitem__(item) .. py:method:: __setitem__(item, value) .. py:method:: _unset_item(item) .. py:method:: get(item, default = None) .. py:method:: pop(item, default = NotImplemented) .. py:method:: to_dict() Standardizes converting an ApiClass to dictionary. Keys of response dictionary are converted to camel case. This also validates the fields ( type, value, etc ) received in the dictionary. .. py:method:: from_dict(input_dict) :classmethod: .. py:class:: _ApiClassFactory Bases: :py:obj:`abc.ABC` Helper class that provides a standard way to create an ABC using inheritance. .. py:attribute:: config_abstract_class :value: None .. py:attribute:: config_class_key :value: None .. py:attribute:: config_class_map .. py:method:: from_dict(config) :classmethod: .. py:class:: PredictionArguments Bases: :py:obj:`abacusai.api_class.abstract.ApiClass` An abstract class for prediction arguments specific to problem type. .. py:attribute:: _support_kwargs :type: bool :value: True .. py:attribute:: kwargs :type: dict ..
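Because every config in this module derives from ``ApiClass``, the ``to_dict()`` / ``from_dict()`` pair above gives a uniform way to serialize them; per the ``to_dict`` note, keys come back camel-cased. A rough sketch (the Confluence location is hypothetical, and the round trip assumes ``from_dict`` accepts the ``to_dict`` output):

.. code-block:: python

    from abacusai.api_class import ConfluenceDatasetConfig

    confluence = ConfluenceDatasetConfig(location='SPACE/Some Page', pull_attachments=True)
    payload = confluence.to_dict()      # keys are camel case, e.g. 'pullAttachments'
    rebuilt = ConfluenceDatasetConfig.from_dict(payload)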
py:attribute:: problem_type :type: abacusai.api_class.enums.ProblemType :value: None .. py:method:: _get_builder() :classmethod: .. py:class:: OptimizationPredictionArguments Bases: :py:obj:`PredictionArguments` Prediction arguments for the OPTIMIZATION problem type :param forced_assignments: Set of assignments to force and resolve before returning query results. :type forced_assignments: dict :param solve_time_limit_seconds: Maximum time in seconds to spend solving the query. :type solve_time_limit_seconds: float :param include_all_assignments: If True, will return all assignments, including assignments with value 0. Default is False. :type include_all_assignments: bool .. py:attribute:: forced_assignments :type: dict :value: None .. py:attribute:: solve_time_limit_seconds :type: float :value: None .. py:attribute:: include_all_assignments :type: bool :value: None .. py:method:: __post_init__() .. py:class:: TimeseriesAnomalyPredictionArguments Bases: :py:obj:`PredictionArguments` Prediction arguments for the TS_ANOMALY problem type :param start_timestamp: Timestamp from which anomalies have to be detected in the training data :type start_timestamp: str :param end_timestamp: Timestamp to which anomalies have to be detected in the training data :type end_timestamp: str :param get_all_item_data: If True, anomaly detection has to be performed on all the data related to input ids :type get_all_item_data: bool .. py:attribute:: start_timestamp :type: str :value: None .. py:attribute:: end_timestamp :type: str :value: None .. py:attribute:: get_all_item_data :type: bool :value: None .. py:method:: __post_init__() .. py:class:: ChatLLMPredictionArguments Bases: :py:obj:`PredictionArguments` Prediction arguments for the CHAT_LLM problem type :param llm_name: Name of the specific LLM backend to use to power the chat experience. :type llm_name: str :param num_completion_tokens: Default for maximum number of tokens for chat answers. :type num_completion_tokens: int :param system_message: The generative LLM system message. :type system_message: str :param temperature: The generative LLM temperature. :type temperature: float :param search_score_cutoff: Cutoff for the document retriever score. Matching search results below this score will be ignored. :type search_score_cutoff: float :param ignore_documents: If True, will ignore any documents and search results, and only use the messages to generate a response. :type ignore_documents: bool .. py:attribute:: llm_name :type: str :value: None .. py:attribute:: num_completion_tokens :type: int :value: None .. py:attribute:: system_message :type: str :value: None .. py:attribute:: temperature :type: float :value: None .. py:attribute:: search_score_cutoff :type: float :value: None .. py:attribute:: ignore_documents :type: bool :value: None .. py:method:: __post_init__() .. py:class:: RegressionPredictionArguments Bases: :py:obj:`PredictionArguments` Prediction arguments for the PREDICTIVE_MODELING problem type :param explain_predictions: If true, will explain predictions. :type explain_predictions: bool :param explainer_type: Type of explainer to use for explanations. :type explainer_type: str .. py:attribute:: explain_predictions :type: bool :value: None .. py:attribute:: explainer_type :type: str :value: None .. py:method:: __post_init__() .. py:class:: ForecastingPredictionArguments Bases: :py:obj:`PredictionArguments` Prediction arguments for the FORECASTING problem type :param num_predictions: The number of timestamps to predict in the future. 
:type num_predictions: int :param prediction_start: The start date for predictions (e.g., "2015-08-01T00:00:00" as input for mid-night of 2015-08-01). :type prediction_start: str :param explain_predictions: If True, explain predictions for forecasting. :type explain_predictions: bool :param explainer_type: Type of explainer to use for explanations. :type explainer_type: str :param get_item_data: If True, will return the data corresponding to items as well. :type get_item_data: bool .. py:attribute:: num_predictions :type: int :value: None .. py:attribute:: prediction_start :type: str :value: None .. py:attribute:: explain_predictions :type: bool :value: None .. py:attribute:: explainer_type :type: str :value: None .. py:attribute:: get_item_data :type: bool :value: None .. py:method:: __post_init__() .. py:class:: CumulativeForecastingPredictionArguments Bases: :py:obj:`PredictionArguments` Prediction arguments for the CUMULATIVE_FORECASTING problem type :param num_predictions: The number of timestamps to predict in the future. :type num_predictions: int :param prediction_start: The start date for predictions (e.g., "2015-08-01T00:00:00" as input for mid-night of 2015-08-01). :type prediction_start: str :param explain_predictions: If True, explain predictions for forecasting. :type explain_predictions: bool :param explainer_type: Type of explainer to use for explanations. :type explainer_type: str :param get_item_data: If True, will return the data corresponding to items as well. :type get_item_data: bool .. py:attribute:: num_predictions :type: int :value: None .. py:attribute:: prediction_start :type: str :value: None .. py:attribute:: explain_predictions :type: bool :value: None .. py:attribute:: explainer_type :type: str :value: None .. py:attribute:: get_item_data :type: bool :value: None .. py:method:: __post_init__() .. py:class:: NaturalLanguageSearchPredictionArguments Bases: :py:obj:`PredictionArguments` Prediction arguments for the NATURAL_LANGUAGE_SEARCH problem type :param llm_name: Name of the specific LLM backend to use to power the chat experience. :type llm_name: str :param num_completion_tokens: Default for maximum number of tokens for chat answers. :type num_completion_tokens: int :param system_message: The generative LLM system message. :type system_message: str :param temperature: The generative LLM temperature. :type temperature: float :param search_score_cutoff: Cutoff for the document retriever score. Matching search results below this score will be ignored. :type search_score_cutoff: float :param ignore_documents: If True, will ignore any documents and search results, and only use the messages to generate a response. :type ignore_documents: bool .. py:attribute:: llm_name :type: str :value: None .. py:attribute:: num_completion_tokens :type: int :value: None .. py:attribute:: system_message :type: str :value: None .. py:attribute:: temperature :type: float :value: None .. py:attribute:: search_score_cutoff :type: float :value: None .. py:attribute:: ignore_documents :type: bool :value: None .. py:method:: __post_init__() .. py:class:: FeatureStorePredictionArguments Bases: :py:obj:`PredictionArguments` Prediction arguments for the FEATURE_STORE problem type :param limit_results: If provided, will limit the number of results to the value specified. :type limit_results: int .. py:attribute:: limit_results :type: int :value: None .. py:method:: __post_init__() .. 
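The problem-type-specific ``PredictionArguments`` subclasses above are typically passed along with prediction requests against a deployment; a brief sketch with made-up values for two of them:

.. code-block:: python

    from abacusai.api_class import (
        ChatLLMPredictionArguments,
        ForecastingPredictionArguments,
    )

    # Hypothetical chat settings: cap answer length, raise the retrieval score cutoff.
    chat_args = ChatLLMPredictionArguments(
        num_completion_tokens=512,
        temperature=0.2,
        search_score_cutoff=0.5,
    )

    # Hypothetical forecast request: 14 steps ahead, starting at the documented example date.
    forecast_args = ForecastingPredictionArguments(
        num_predictions=14,
        prediction_start='2015-08-01T00:00:00',
        explain_predictions=True,
    )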
py:class:: _PredictionArgumentsFactory Bases: :py:obj:`abacusai.api_class.abstract._ApiClassFactory` Helper class that provides a standard way to create an ABC using inheritance. .. py:attribute:: config_abstract_class .. py:attribute:: config_class_key :value: 'problem_type' .. py:attribute:: config_class_map .. py:class:: ApiClass Bases: :py:obj:`abc.ABC` Helper class that provides a standard way to create an ABC using inheritance. .. py:attribute:: _upper_snake_case_keys :type: bool :value: False .. py:attribute:: _support_kwargs :type: bool :value: False .. py:method:: __post_init__() .. py:method:: _get_builder() :classmethod: .. py:method:: __str__() .. py:method:: _repr_html_() .. py:method:: __getitem__(item) .. py:method:: __setitem__(item, value) .. py:method:: _unset_item(item) .. py:method:: get(item, default = None) .. py:method:: pop(item, default = NotImplemented) .. py:method:: to_dict() Standardizes converting an ApiClass to dictionary. Keys of response dictionary are converted to camel case. This also validates the fields ( type, value, etc ) received in the dictionary. .. py:method:: from_dict(input_dict) :classmethod: .. py:class:: VectorStoreTextEncoder Bases: :py:obj:`ApiEnum` Generic enumeration. Derive from this class to define new enumerations. .. py:attribute:: E5 :value: 'E5' .. py:attribute:: OPENAI :value: 'OPENAI' .. py:attribute:: OPENAI_COMPACT :value: 'OPENAI_COMPACT' .. py:attribute:: OPENAI_LARGE :value: 'OPENAI_LARGE' .. py:attribute:: SENTENCE_BERT :value: 'SENTENCE_BERT' .. py:attribute:: E5_SMALL :value: 'E5_SMALL' .. py:attribute:: CODE_BERT :value: 'CODE_BERT' .. py:class:: VectorStoreConfig Bases: :py:obj:`abacusai.api_class.abstract.ApiClass` Config for indexing options of a document retriever. Default values of optional arguments are heuristically selected by the Abacus.AI platform based on the underlying data. :param chunk_size: The size of text chunks in the vector store. :type chunk_size: int :param chunk_overlap_fraction: The fraction of overlap between chunks. :type chunk_overlap_fraction: float :param text_encoder: Encoder used to index texts from the documents. :type text_encoder: VectorStoreTextEncoder :param chunk_size_factors: Chunking data with multiple sizes. The specified list of factors are used to calculate more sizes, in addition to `chunk_size`. :type chunk_size_factors: list :param score_multiplier_column: If provided, will use the values in this metadata column to modify the relevance score of returned chunks for all queries. :type score_multiplier_column: str :param prune_vectors: Transform vectors using SVD so that the average component of vectors in the corpus are removed. :type prune_vectors: bool :param index_metadata_columns: If True, metadata columns of the FG will also be used for indexing and querying. :type index_metadata_columns: bool :param use_document_summary: If True, uses the summary of the document in addition to chunks of the document for indexing and querying. :type use_document_summary: bool :param summary_instructions: Instructions for the LLM to generate the document summary. :type summary_instructions: str :param standalone_deployment: If True, the document retriever will be deployed as a standalone deployment. :type standalone_deployment: bool .. py:attribute:: chunk_size :type: int :value: None .. py:attribute:: chunk_overlap_fraction :type: float :value: None .. py:attribute:: text_encoder :type: abacusai.api_class.enums.VectorStoreTextEncoder :value: None .. 
py:attribute:: chunk_size_factors :type: list :value: None .. py:attribute:: score_multiplier_column :type: str :value: None .. py:attribute:: prune_vectors :type: bool :value: None .. py:attribute:: index_metadata_columns :type: bool :value: None .. py:attribute:: use_document_summary :type: bool :value: None .. py:attribute:: summary_instructions :type: str :value: None .. py:attribute:: standalone_deployment :type: bool :value: False .. py:data:: DocumentRetrieverConfig .. py:function:: deprecated_enums(*enum_values) .. py:class:: ApiEnum Bases: :py:obj:`enum.Enum` Generic enumeration. Derive from this class to define new enumerations. .. py:attribute:: __deprecated_values__ :value: [] .. py:method:: is_deprecated() .. py:method:: __eq__(other) .. py:method:: __hash__() .. py:class:: ProblemType Bases: :py:obj:`ApiEnum` Generic enumeration. Derive from this class to define new enumerations. .. py:attribute:: AI_AGENT :value: 'ai_agent' .. py:attribute:: EVENT_ANOMALY :value: 'event_anomaly' .. py:attribute:: CLUSTERING :value: 'clustering' .. py:attribute:: CLUSTERING_TIMESERIES :value: 'clustering_timeseries' .. py:attribute:: CUMULATIVE_FORECASTING :value: 'cumulative_forecasting' .. py:attribute:: NAMED_ENTITY_EXTRACTION :value: 'nlp_ner' .. py:attribute:: NATURAL_LANGUAGE_SEARCH :value: 'nlp_search' .. py:attribute:: CHAT_LLM :value: 'chat_llm' .. py:attribute:: SENTENCE_BOUNDARY_DETECTION :value: 'nlp_sentence_boundary_detection' .. py:attribute:: SENTIMENT_DETECTION :value: 'nlp_sentiment' .. py:attribute:: DOCUMENT_CLASSIFICATION :value: 'nlp_classification' .. py:attribute:: DOCUMENT_SUMMARIZATION :value: 'nlp_summarization' .. py:attribute:: DOCUMENT_VISUALIZATION :value: 'nlp_document_visualization' .. py:attribute:: PERSONALIZATION :value: 'personalization' .. py:attribute:: PREDICTIVE_MODELING :value: 'regression' .. py:attribute:: FINETUNED_LLM :value: 'finetuned_llm' .. py:attribute:: FORECASTING :value: 'forecasting' .. py:attribute:: CUSTOM_TRAINED_MODEL :value: 'plug_and_play' .. py:attribute:: CUSTOM_ALGORITHM :value: 'trainable_plug_and_play' .. py:attribute:: FEATURE_STORE :value: 'feature_store' .. py:attribute:: IMAGE_CLASSIFICATION :value: 'vision_classification' .. py:attribute:: OBJECT_DETECTION :value: 'vision_object_detection' .. py:attribute:: IMAGE_VALUE_PREDICTION :value: 'vision_regression' .. py:attribute:: MODEL_MONITORING :value: 'model_monitoring' .. py:attribute:: LANGUAGE_DETECTION :value: 'language_detection' .. py:attribute:: OPTIMIZATION :value: 'optimization' .. py:attribute:: PRETRAINED_MODELS :value: 'pretrained' .. py:attribute:: THEME_ANALYSIS :value: 'theme_analysis' .. py:attribute:: TS_ANOMALY :value: 'ts_anomaly' .. py:class:: RegressionObjective Bases: :py:obj:`ApiEnum` Generic enumeration. Derive from this class to define new enumerations. .. py:attribute:: AUC :value: 'auc' .. py:attribute:: ACCURACY :value: 'acc' .. py:attribute:: LOG_LOSS :value: 'log_loss' .. py:attribute:: PRECISION :value: 'precision' .. py:attribute:: RECALL :value: 'recall' .. py:attribute:: F1_SCORE :value: 'fscore' .. py:attribute:: MAE :value: 'mae' .. py:attribute:: MAPE :value: 'mape' .. py:attribute:: WAPE :value: 'wape' .. py:attribute:: RMSE :value: 'rmse' .. py:attribute:: R_SQUARED_COEFFICIENT_OF_DETERMINATION :value: 'r^2' .. py:class:: RegressionTreeHPOMode Bases: :py:obj:`ApiEnum` Generic enumeration. Derive from this class to define new enumerations. .. py:attribute:: RAPID :value: 'rapid' .. py:attribute:: THOROUGH :value: 'thorough' .. 
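The ``VectorStoreConfig`` documented above (exposed alongside the module-level ``DocumentRetrieverConfig`` attribute) collects the document retriever indexing options; a short sketch with illustrative values, where ``score_multiplier_column`` refers to a hypothetical metadata column:

.. code-block:: python

    from abacusai.api_class import VectorStoreConfig, VectorStoreTextEncoder

    # Illustrative values only; options left unset are chosen by the platform.
    retriever_config = VectorStoreConfig(
        chunk_size=512,
        chunk_overlap_fraction=0.1,
        text_encoder=VectorStoreTextEncoder.OPENAI_LARGE,
        score_multiplier_column='priority',
    )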
py:class:: PartialDependenceAnalysis Bases: :py:obj:`ApiEnum` Generic enumeration. Derive from this class to define new enumerations. .. py:attribute:: RAPID :value: 'rapid' .. py:attribute:: THOROUGH :value: 'thorough' .. py:class:: RegressionAugmentationStrategy Bases: :py:obj:`ApiEnum` Generic enumeration. Derive from this class to define new enumerations. .. py:attribute:: SMOTE :value: 'smote' .. py:attribute:: RESAMPLE :value: 'resample' .. py:class:: RegressionTargetTransform Bases: :py:obj:`ApiEnum` Generic enumeration. Derive from this class to define new enumerations. .. py:attribute:: LOG :value: 'log' .. py:attribute:: QUANTILE :value: 'quantile' .. py:attribute:: YEO_JOHNSON :value: 'yeo-johnson' .. py:attribute:: BOX_COX :value: 'box-cox' .. py:class:: RegressionTypeOfSplit Bases: :py:obj:`ApiEnum` Generic enumeration. Derive from this class to define new enumerations. .. py:attribute:: RANDOM :value: 'Random Sampling' .. py:attribute:: TIMESTAMP_BASED :value: 'Timestamp Based' .. py:attribute:: ROW_INDICATOR_BASED :value: 'Row Indicator Based' .. py:attribute:: STRATIFIED_RANDOM_SAMPLING :value: 'Stratified Random Sampling' .. py:class:: RegressionTimeSplitMethod Bases: :py:obj:`ApiEnum` Generic enumeration. Derive from this class to define new enumerations. .. py:attribute:: TEST_SPLIT_PERCENTAGE_BASED :value: 'Test Split Percentage Based' .. py:attribute:: TEST_START_TIMESTAMP_BASED :value: 'Test Start Timestamp Based' .. py:class:: RegressionLossFunction Bases: :py:obj:`ApiEnum` Generic enumeration. Derive from this class to define new enumerations. .. py:attribute:: HUBER :value: 'Huber' .. py:attribute:: MSE :value: 'Mean Squared Error' .. py:attribute:: MAE :value: 'Mean Absolute Error' .. py:attribute:: MAPE :value: 'Mean Absolute Percentage Error' .. py:attribute:: MSLE :value: 'Mean Squared Logarithmic Error' .. py:attribute:: TWEEDIE :value: 'Tweedie' .. py:attribute:: CROSS_ENTROPY :value: 'Cross Entropy' .. py:attribute:: FOCAL_CROSS_ENTROPY :value: 'Focal Cross Entropy' .. py:attribute:: AUTOMATIC :value: 'Automatic' .. py:attribute:: CUSTOM :value: 'Custom' .. py:class:: ExplainerType Bases: :py:obj:`ApiEnum` Generic enumeration. Derive from this class to define new enumerations. .. py:attribute:: KERNEL_EXPLAINER :value: 'KERNEL_EXPLAINER' .. py:attribute:: LIME_EXPLAINER :value: 'LIME_EXPLAINER' .. py:attribute:: TREE_EXPLAINER :value: 'TREE_EXPLAINER' .. py:attribute:: EBM_EXPLAINER :value: 'EBM_EXPLAINER' .. py:class:: SamplingMethodType Bases: :py:obj:`ApiEnum` Generic enumeration. Derive from this class to define new enumerations. .. py:attribute:: N_SAMPLING :value: 'N_SAMPLING' .. py:attribute:: PERCENT_SAMPLING :value: 'PERCENT_SAMPLING' .. py:class:: MergeMode Bases: :py:obj:`ApiEnum` Generic enumeration. Derive from this class to define new enumerations. .. py:attribute:: LAST_N :value: 'LAST_N' .. py:attribute:: TIME_WINDOW :value: 'TIME_WINDOW' .. py:class:: OperatorType Bases: :py:obj:`ApiEnum` Generic enumeration. Derive from this class to define new enumerations. .. py:attribute:: UNPIVOT :value: 'UNPIVOT' .. py:attribute:: MARKDOWN :value: 'MARKDOWN' .. py:attribute:: CRAWLER :value: 'CRAWLER' .. py:attribute:: EXTRACT_DOCUMENT_DATA :value: 'EXTRACT_DOCUMENT_DATA' .. py:attribute:: DATA_GENERATION :value: 'DATA_GENERATION' .. py:attribute:: UNION :value: 'UNION' .. py:class:: MarkdownOperatorInputType Bases: :py:obj:`ApiEnum` Generic enumeration. Derive from this class to define new enumerations. .. py:attribute:: HTML :value: 'HTML' .. 
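The enumerations in this module serialize to the literal values listed here, so the listed strings (or numbers) are what appears in config dictionaries; for example:

.. code-block:: python

    from abacusai.api_class import ProblemType, RegressionLossFunction

    assert ProblemType.FORECASTING.value == 'forecasting'
    assert RegressionLossFunction.MSE.value == 'Mean Squared Error'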
py:class:: FillLogic Bases: :py:obj:`ApiEnum` Generic enumeration. Derive from this class to define new enumerations. .. py:attribute:: AVERAGE :value: 'average' .. py:attribute:: MAX :value: 'max' .. py:attribute:: MEDIAN :value: 'median' .. py:attribute:: MIN :value: 'min' .. py:attribute:: CUSTOM :value: 'custom' .. py:attribute:: BACKFILL :value: 'bfill' .. py:attribute:: FORWARDFILL :value: 'ffill' .. py:attribute:: LINEAR :value: 'linear' .. py:attribute:: NEAREST :value: 'nearest' .. py:class:: BatchSize Bases: :py:obj:`ApiEnum` Generic enumeration. Derive from this class to define new enumerations. .. py:attribute:: BATCH_8 :value: 8 .. py:attribute:: BATCH_16 :value: 16 .. py:attribute:: BATCH_32 :value: 32 .. py:attribute:: BATCH_64 :value: 64 .. py:attribute:: BATCH_128 :value: 128 .. py:attribute:: BATCH_256 :value: 256 .. py:attribute:: BATCH_384 :value: 384 .. py:attribute:: BATCH_512 :value: 512 .. py:attribute:: BATCH_740 :value: 740 .. py:attribute:: BATCH_1024 :value: 1024 .. py:class:: HolidayCalendars Bases: :py:obj:`ApiEnum` Generic enumeration. Derive from this class to define new enumerations. .. py:attribute:: AU :value: 'AU' .. py:attribute:: UK :value: 'UK' .. py:attribute:: US :value: 'US' .. py:class:: FileFormat Bases: :py:obj:`ApiEnum` Generic enumeration. Derive from this class to define new enumerations. .. py:attribute:: AVRO :value: 'AVRO' .. py:attribute:: PARQUET :value: 'PARQUET' .. py:attribute:: TFRECORD :value: 'TFRECORD' .. py:attribute:: TSV :value: 'TSV' .. py:attribute:: CSV :value: 'CSV' .. py:attribute:: ORC :value: 'ORC' .. py:attribute:: JSON :value: 'JSON' .. py:attribute:: ODS :value: 'ODS' .. py:attribute:: XLS :value: 'XLS' .. py:attribute:: GZ :value: 'GZ' .. py:attribute:: ZIP :value: 'ZIP' .. py:attribute:: TAR :value: 'TAR' .. py:attribute:: DOCX :value: 'DOCX' .. py:attribute:: PDF :value: 'PDF' .. py:attribute:: MD :value: 'md' .. py:attribute:: RAR :value: 'RAR' .. py:attribute:: GIF :value: 'GIF' .. py:attribute:: JPEG :value: 'JPG' .. py:attribute:: PNG :value: 'PNG' .. py:attribute:: TIF :value: 'TIFF' .. py:attribute:: NUMBERS :value: 'NUMBERS' .. py:attribute:: PPTX :value: 'PPTX' .. py:attribute:: PPT :value: 'PPT' .. py:attribute:: HTML :value: 'HTML' .. py:attribute:: TXT :value: 'txt' .. py:attribute:: EML :value: 'eml' .. py:attribute:: MP3 :value: 'MP3' .. py:attribute:: MP4 :value: 'MP4' .. py:attribute:: FLV :value: 'flv' .. py:attribute:: MOV :value: 'mov' .. py:attribute:: MPG :value: 'mpg' .. py:attribute:: MPEG :value: 'mpeg' .. py:attribute:: WEBP :value: 'webp' .. py:attribute:: WEBM :value: 'webm' .. py:attribute:: WMV :value: 'wmv' .. py:attribute:: MSG :value: 'msg' .. py:class:: ExperimentationMode Bases: :py:obj:`ApiEnum` Generic enumeration. Derive from this class to define new enumerations. .. py:attribute:: RAPID :value: 'rapid' .. py:attribute:: THOROUGH :value: 'thorough' .. py:class:: PersonalizationTrainingMode Bases: :py:obj:`ApiEnum` Generic enumeration. Derive from this class to define new enumerations. .. py:attribute:: EXPERIMENTAL :value: 'EXP' .. py:attribute:: PRODUCTION :value: 'PROD' .. py:class:: PersonalizationObjective Bases: :py:obj:`ApiEnum` Generic enumeration. Derive from this class to define new enumerations. .. py:attribute:: NDCG :value: 'ndcg' .. py:attribute:: NDCG_5 :value: 'ndcg@5' .. py:attribute:: NDCG_10 :value: 'ndcg@10' .. py:attribute:: MAP :value: 'map' .. py:attribute:: MAP_5 :value: 'map@5' .. py:attribute:: MAP_10 :value: 'map@10' .. 
py:attribute:: MRR :value: 'mrr' .. py:attribute:: PERSONALIZATION :value: 'personalization@10' .. py:attribute:: COVERAGE :value: 'coverage' .. py:class:: ForecastingObjective Bases: :py:obj:`ApiEnum` Generic enumeration. Derive from this class to define new enumerations. .. py:attribute:: ACCURACY :value: 'w_c_accuracy' .. py:attribute:: WAPE :value: 'wape' .. py:attribute:: MAPE :value: 'mape' .. py:attribute:: CMAPE :value: 'cmape' .. py:attribute:: RMSE :value: 'rmse' .. py:attribute:: CV :value: 'coefficient_of_variation' .. py:attribute:: BIAS :value: 'bias' .. py:attribute:: SRMSE :value: 'srmse' .. py:class:: ForecastingFrequency Bases: :py:obj:`ApiEnum` Generic enumeration. Derive from this class to define new enumerations. .. py:attribute:: HOURLY :value: '1H' .. py:attribute:: DAILY :value: '1D' .. py:attribute:: WEEKLY_SUNDAY_START :value: '1W' .. py:attribute:: WEEKLY_MONDAY_START :value: 'W-MON' .. py:attribute:: WEEKLY_SATURDAY_START :value: 'W-SAT' .. py:attribute:: MONTH_START :value: 'MS' .. py:attribute:: MONTH_END :value: '1M' .. py:attribute:: QUARTER_START :value: 'QS' .. py:attribute:: QUARTER_END :value: '1Q' .. py:attribute:: YEARLY :value: '1Y' .. py:class:: ForecastingDataSplitType Bases: :py:obj:`ApiEnum` Generic enumeration. Derive from this class to define new enumerations. .. py:attribute:: AUTO :value: 'Automatic Time Based' .. py:attribute:: TIMESTAMP :value: 'Timestamp Based' .. py:attribute:: ITEM :value: 'Item Based' .. py:attribute:: PREDICTION_LENGTH :value: 'Force Prediction Length' .. py:attribute:: L_SHAPED_AUTO :value: 'L-shaped Split - Automatic Time Based' .. py:attribute:: L_SHAPED_TIMESTAMP :value: 'L-shaped Split - Timestamp Based' .. py:class:: ForecastingLossFunction Bases: :py:obj:`ApiEnum` Generic enumeration. Derive from this class to define new enumerations. .. py:attribute:: CUSTOM :value: 'Custom' .. py:attribute:: MEAN_ABSOLUTE_ERROR :value: 'mae' .. py:attribute:: NORMALIZED_MEAN_ABSOLUTE_ERROR :value: 'nmae' .. py:attribute:: PEAKS_MEAN_ABSOLUTE_ERROR :value: 'peaks_mae' .. py:attribute:: MEAN_ABSOLUTE_PERCENTAGE_ERROR :value: 'stable_mape' .. py:attribute:: POINTWISE_ACCURACY :value: 'accuracy' .. py:attribute:: ROOT_MEAN_SQUARE_ERROR :value: 'rmse' .. py:attribute:: NORMALIZED_ROOT_MEAN_SQUARE_ERROR :value: 'nrmse' .. py:attribute:: ASYMMETRIC_MEAN_ABSOLUTE_PERCENTAGE_ERROR :value: 'asymmetric_mape' .. py:attribute:: STABLE_STANDARDIZED_MEAN_ABSOLUTE_PERCENTAGE_ERROR :value: 'stable_standardized_mape_with_cmape' .. py:attribute:: GAUSSIAN :value: 'mle_gaussian_local' .. py:attribute:: GAUSSIAN_FULL_COVARIANCE :value: 'mle_gaussfullcov' .. py:attribute:: GUASSIAN_EXPONENTIAL :value: 'mle_gaussexp' .. py:attribute:: MIX_GAUSSIANS :value: 'mle_gaussmix' .. py:attribute:: WEIBULL :value: 'mle_weibull' .. py:attribute:: NEGATIVE_BINOMIAL :value: 'mle_negbinom' .. py:attribute:: LOG_ROOT_MEAN_SQUARE_ERROR :value: 'log_rmse' .. py:class:: ForecastingLocalScaling Bases: :py:obj:`ApiEnum` Generic enumeration. Derive from this class to define new enumerations. .. py:attribute:: ZSCORE :value: 'zscore' .. py:attribute:: SLIDING_ZSCORE :value: 'sliding_zscore' .. py:attribute:: LAST_POINT :value: 'lastpoint' .. py:attribute:: MIN_MAX :value: 'minmax' .. py:attribute:: MIN_STD :value: 'minstd' .. py:attribute:: ROBUST :value: 'robust' .. py:attribute:: ITEM :value: 'item' .. py:class:: ForecastingFillMethod Bases: :py:obj:`ApiEnum` Generic enumeration. Derive from this class to define new enumerations. .. py:attribute:: BACK :value: 'BACK' .. 
py:attribute:: MIDDLE :value: 'MIDDLE' .. py:attribute:: FUTURE :value: 'FUTURE' .. py:class:: ForecastingQuanitlesExtensionMethod Bases: :py:obj:`ApiEnum` Generic enumeration. Derive from this class to define new enumerations. .. py:attribute:: DIRECT :value: 'direct' .. py:attribute:: QUADRATIC :value: 'quadratic' .. py:attribute:: ANCESTRAL_SIMULATION :value: 'simulation' .. py:class:: TimeseriesAnomalyDataSplitType Bases: :py:obj:`ApiEnum` Generic enumeration. Derive from this class to define new enumerations. .. py:attribute:: AUTO :value: 'Automatic Time Based' .. py:attribute:: TIMESTAMP :value: 'Fixed Timestamp Based' .. py:class:: TimeseriesAnomalyTypeOfAnomaly Bases: :py:obj:`ApiEnum` Generic enumeration. Derive from this class to define new enumerations. .. py:attribute:: HIGH_PEAK :value: 'high_peak' .. py:attribute:: LOW_PEAK :value: 'low_peak' .. py:class:: TimeseriesAnomalyUseHeuristic Bases: :py:obj:`ApiEnum` Generic enumeration. Derive from this class to define new enumerations. .. py:attribute:: ENABLE :value: 'enable' .. py:attribute:: DISABLE :value: 'disable' .. py:attribute:: AUTOMATIC :value: 'automatic' .. py:class:: NERObjective Bases: :py:obj:`ApiEnum` Generic enumeration. Derive from this class to define new enumerations. .. py:attribute:: LOG_LOSS :value: 'log_loss' .. py:attribute:: AUC :value: 'auc' .. py:attribute:: PRECISION :value: 'precision' .. py:attribute:: RECALL :value: 'recall' .. py:attribute:: ANNOTATIONS_PRECISION :value: 'annotations_precision' .. py:attribute:: ANNOTATIONS_RECALL :value: 'annotations_recall' .. py:class:: NERModelType Bases: :py:obj:`ApiEnum` Generic enumeration. Derive from this class to define new enumerations. .. py:attribute:: PRETRAINED_BERT :value: 'pretrained_bert' .. py:attribute:: PRETRAINED_ROBERTA_27 :value: 'pretrained_roberta_27' .. py:attribute:: PRETRAINED_ROBERTA_43 :value: 'pretrained_roberta_43' .. py:attribute:: PRETRAINED_MULTILINGUAL :value: 'pretrained_multilingual' .. py:attribute:: LEARNED :value: 'learned' .. py:class:: NLPDocumentFormat Bases: :py:obj:`ApiEnum` Generic enumeration. Derive from this class to define new enumerations. .. py:attribute:: AUTO :value: 'auto' .. py:attribute:: TEXT :value: 'text' .. py:attribute:: DOC :value: 'doc' .. py:attribute:: TOKENS :value: 'tokens' .. py:class:: SentimentType Bases: :py:obj:`ApiEnum` Generic enumeration. Derive from this class to define new enumerations. .. py:attribute:: VALENCE :value: 'valence' .. py:attribute:: EMOTION :value: 'emotion' .. py:class:: ClusteringImputationMethod Bases: :py:obj:`ApiEnum` Generic enumeration. Derive from this class to define new enumerations. .. py:attribute:: AUTOMATIC :value: 'Automatic' .. py:attribute:: ZEROS :value: 'Zeros' .. py:attribute:: INTERPOLATE :value: 'Interpolate' .. py:class:: ConnectorType Bases: :py:obj:`ApiEnum` Generic enumeration. Derive from this class to define new enumerations. .. py:attribute:: FILE :value: 'FILE' .. py:attribute:: DATABASE :value: 'DATABASE' .. py:attribute:: STREAMING :value: 'STREAMING' .. py:attribute:: APPLICATION :value: 'APPLICATION' .. py:class:: ApplicationConnectorType Bases: :py:obj:`ApiEnum` Generic enumeration. Derive from this class to define new enumerations. .. py:attribute:: GOOGLEANALYTICS :value: 'GOOGLEANALYTICS' .. py:attribute:: GOOGLEDRIVE :value: 'GOOGLEDRIVE' .. py:attribute:: GIT :value: 'GIT' .. py:attribute:: CONFLUENCE :value: 'CONFLUENCE' .. py:attribute:: JIRA :value: 'JIRA' .. py:attribute:: ONEDRIVE :value: 'ONEDRIVE' .. 
py:attribute:: ZENDESK :value: 'ZENDESK' .. py:attribute:: SLACK :value: 'SLACK' .. py:attribute:: SHAREPOINT :value: 'SHAREPOINT' .. py:attribute:: TEAMS :value: 'TEAMS' .. py:attribute:: ABACUSUSAGEMETRICS :value: 'ABACUSUSAGEMETRICS' .. py:attribute:: MICROSOFTAUTH :value: 'MICROSOFTAUTH' .. py:attribute:: FRESHSERVICE :value: 'FRESHSERVICE' .. py:attribute:: ZENDESKSUNSHINEMESSAGING :value: 'ZENDESKSUNSHINEMESSAGING' .. py:attribute:: GOOGLEDRIVEUSER :value: 'GOOGLEDRIVEUSER' .. py:attribute:: GOOGLEWORKSPACEUSER :value: 'GOOGLEWORKSPACEUSER' .. py:attribute:: GMAILUSER :value: 'GMAILUSER' .. py:attribute:: GOOGLECALENDAR :value: 'GOOGLECALENDAR' .. py:attribute:: GOOGLESHEETS :value: 'GOOGLESHEETS' .. py:attribute:: GOOGLEDOCS :value: 'GOOGLEDOCS' .. py:attribute:: TEAMSSCRAPER :value: 'TEAMSSCRAPER' .. py:attribute:: GITHUBUSER :value: 'GITHUBUSER' .. py:attribute:: OKTASAML :value: 'OKTASAML' .. py:attribute:: BOX :value: 'BOX' .. py:attribute:: SFTPAPPLICATION :value: 'SFTPAPPLICATION' .. py:attribute:: OAUTH :value: 'OAUTH' .. py:class:: StreamingConnectorType Bases: :py:obj:`ApiEnum` Generic enumeration. Derive from this class to define new enumerations. .. py:attribute:: KAFKA :value: 'KAFKA' .. py:class:: PythonFunctionArgumentType Bases: :py:obj:`ApiEnum` Generic enumeration. Derive from this class to define new enumerations. .. py:attribute:: FEATURE_GROUP :value: 'FEATURE_GROUP' .. py:attribute:: INTEGER :value: 'INTEGER' .. py:attribute:: STRING :value: 'STRING' .. py:attribute:: BOOLEAN :value: 'BOOLEAN' .. py:attribute:: FLOAT :value: 'FLOAT' .. py:attribute:: JSON :value: 'JSON' .. py:attribute:: LIST :value: 'LIST' .. py:attribute:: DATASET_ID :value: 'DATASET_ID' .. py:attribute:: MODEL_ID :value: 'MODEL_ID' .. py:attribute:: FEATURE_GROUP_ID :value: 'FEATURE_GROUP_ID' .. py:attribute:: MONITOR_ID :value: 'MONITOR_ID' .. py:attribute:: BATCH_PREDICTION_ID :value: 'BATCH_PREDICTION_ID' .. py:attribute:: DEPLOYMENT_ID :value: 'DEPLOYMENT_ID' .. py:attribute:: ATTACHMENT :value: 'ATTACHMENT' .. py:class:: PythonFunctionOutputArgumentType Bases: :py:obj:`ApiEnum` Generic enumeration. Derive from this class to define new enumerations. .. py:attribute:: NTEGER :value: 'INTEGER' .. py:attribute:: STRING :value: 'STRING' .. py:attribute:: BOOLEAN :value: 'BOOLEAN' .. py:attribute:: FLOAT :value: 'FLOAT' .. py:attribute:: JSON :value: 'JSON' .. py:attribute:: LIST :value: 'LIST' .. py:attribute:: DATASET_ID :value: 'DATASET_ID' .. py:attribute:: MODEL_ID :value: 'MODEL_ID' .. py:attribute:: FEATURE_GROUP_ID :value: 'FEATURE_GROUP_ID' .. py:attribute:: MONITOR_ID :value: 'MONITOR_ID' .. py:attribute:: BATCH_PREDICTION_ID :value: 'BATCH_PREDICTION_ID' .. py:attribute:: DEPLOYMENT_ID :value: 'DEPLOYMENT_ID' .. py:attribute:: ANY :value: 'ANY' .. py:attribute:: ATTACHMENT :value: 'ATTACHMENT' .. py:class:: VectorStoreTextEncoder Bases: :py:obj:`ApiEnum` Generic enumeration. Derive from this class to define new enumerations. .. py:attribute:: E5 :value: 'E5' .. py:attribute:: OPENAI :value: 'OPENAI' .. py:attribute:: OPENAI_COMPACT :value: 'OPENAI_COMPACT' .. py:attribute:: OPENAI_LARGE :value: 'OPENAI_LARGE' .. py:attribute:: SENTENCE_BERT :value: 'SENTENCE_BERT' .. py:attribute:: E5_SMALL :value: 'E5_SMALL' .. py:attribute:: CODE_BERT :value: 'CODE_BERT' .. py:class:: LLMName Bases: :py:obj:`ApiEnum` Generic enumeration. Derive from this class to define new enumerations. .. py:attribute:: OPENAI_GPT4 :value: 'OPENAI_GPT4' .. 
py:attribute:: OPENAI_GPT4_32K :value: 'OPENAI_GPT4_32K' .. py:attribute:: OPENAI_GPT4_128K :value: 'OPENAI_GPT4_128K' .. py:attribute:: OPENAI_GPT4_128K_LATEST :value: 'OPENAI_GPT4_128K_LATEST' .. py:attribute:: OPENAI_GPT4O :value: 'OPENAI_GPT4O' .. py:attribute:: OPENAI_GPT4O_MINI :value: 'OPENAI_GPT4O_MINI' .. py:attribute:: OPENAI_O1_MINI :value: 'OPENAI_O1_MINI' .. py:attribute:: OPENAI_GPT3_5 :value: 'OPENAI_GPT3_5' .. py:attribute:: OPENAI_GPT3_5_TEXT :value: 'OPENAI_GPT3_5_TEXT' .. py:attribute:: LLAMA3_1_405B :value: 'LLAMA3_1_405B' .. py:attribute:: LLAMA3_1_70B :value: 'LLAMA3_1_70B' .. py:attribute:: LLAMA3_1_8B :value: 'LLAMA3_1_8B' .. py:attribute:: LLAMA3_3_70B :value: 'LLAMA3_3_70B' .. py:attribute:: LLAMA3_LARGE_CHAT :value: 'LLAMA3_LARGE_CHAT' .. py:attribute:: CLAUDE_V3_OPUS :value: 'CLAUDE_V3_OPUS' .. py:attribute:: CLAUDE_V3_SONNET :value: 'CLAUDE_V3_SONNET' .. py:attribute:: CLAUDE_V3_HAIKU :value: 'CLAUDE_V3_HAIKU' .. py:attribute:: CLAUDE_V3_5_SONNET :value: 'CLAUDE_V3_5_SONNET' .. py:attribute:: CLAUDE_V3_7_SONNET :value: 'CLAUDE_V3_7_SONNET' .. py:attribute:: CLAUDE_V3_5_HAIKU :value: 'CLAUDE_V3_5_HAIKU' .. py:attribute:: GEMINI_1_5_PRO :value: 'GEMINI_1_5_PRO' .. py:attribute:: GEMINI_2_FLASH :value: 'GEMINI_2_FLASH' .. py:attribute:: GEMINI_2_FLASH_THINKING :value: 'GEMINI_2_FLASH_THINKING' .. py:attribute:: GEMINI_2_PRO :value: 'GEMINI_2_PRO' .. py:attribute:: ABACUS_SMAUG3 :value: 'ABACUS_SMAUG3' .. py:attribute:: ABACUS_DRACARYS :value: 'ABACUS_DRACARYS' .. py:attribute:: QWEN_2_5_32B :value: 'QWEN_2_5_32B' .. py:attribute:: QWEN_2_5_32B_BASE :value: 'QWEN_2_5_32B_BASE' .. py:attribute:: QWEN_2_5_72B :value: 'QWEN_2_5_72B' .. py:attribute:: QWQ_32B :value: 'QWQ_32B' .. py:attribute:: GEMINI_1_5_FLASH :value: 'GEMINI_1_5_FLASH' .. py:attribute:: XAI_GROK :value: 'XAI_GROK' .. py:attribute:: DEEPSEEK_V3 :value: 'DEEPSEEK_V3' .. py:attribute:: DEEPSEEK_R1 :value: 'DEEPSEEK_R1' .. py:class:: MonitorAlertType Bases: :py:obj:`ApiEnum` Generic enumeration. Derive from this class to define new enumerations. .. py:attribute:: ACCURACY_BELOW_THRESHOLD :value: 'AccuracyBelowThreshold' .. py:attribute:: FEATURE_DRIFT :value: 'FeatureDrift' .. py:attribute:: DATA_INTEGRITY_VIOLATIONS :value: 'DataIntegrityViolations' .. py:attribute:: BIAS_VIOLATIONS :value: 'BiasViolations' .. py:attribute:: HISTORY_LENGTH_DRIFT :value: 'HistoryLengthDrift' .. py:attribute:: TARGET_DRIFT :value: 'TargetDrift' .. py:attribute:: PREDICTION_COUNT :value: 'PredictionCount' .. py:class:: FeatureDriftType Bases: :py:obj:`ApiEnum` Generic enumeration. Derive from this class to define new enumerations. .. py:attribute:: KL :value: 'kl' .. py:attribute:: KS :value: 'ks' .. py:attribute:: WS :value: 'ws' .. py:attribute:: JS :value: 'js' .. py:attribute:: PSI :value: 'psi' .. py:attribute:: CHI_SQUARE :value: 'chi_square' .. py:attribute:: CSI :value: 'csi' .. py:class:: DataIntegrityViolationType Bases: :py:obj:`ApiEnum` Generic enumeration. Derive from this class to define new enumerations. .. py:attribute:: NULL_VIOLATIONS :value: 'null_violations' .. py:attribute:: RANGE_VIOLATIONS :value: 'range_violations' .. py:attribute:: CATEGORICAL_RANGE_VIOLATION :value: 'categorical_range_violations' .. py:attribute:: TOTAL_VIOLATIONS :value: 'total_violations' .. py:class:: BiasType Bases: :py:obj:`ApiEnum` Generic enumeration. Derive from this class to define new enumerations. .. py:attribute:: DEMOGRAPHIC_PARITY :value: 'demographic_parity' .. 
py:attribute:: EQUAL_OPPORTUNITY :value: 'equal_opportunity' .. py:attribute:: GROUP_BENEFIT_EQUALITY :value: 'group_benefit' .. py:attribute:: TOTAL :value: 'total' .. py:class:: AlertActionType Bases: :py:obj:`ApiEnum` Generic enumeration. Derive from this class to define new enumerations. .. py:attribute:: EMAIL :value: 'Email' .. py:class:: PythonFunctionType Bases: :py:obj:`ApiEnum` Generic enumeration. Derive from this class to define new enumerations. .. py:attribute:: FEATURE_GROUP :value: 'FEATURE_GROUP' .. py:attribute:: PLOTLY_FIG :value: 'PLOTLY_FIG' .. py:attribute:: STEP_FUNCTION :value: 'STEP_FUNCTION' .. py:attribute:: USERCODE_TOOL :value: 'USERCODE_TOOL' .. py:attribute:: CONNECTOR_TOOL :value: 'CONNECTOR_TOOL' .. py:class:: EvalArtifactType Bases: :py:obj:`ApiEnum` Generic enumeration. Derive from this class to define new enumerations. .. py:attribute:: FORECASTING_ACCURACY :value: 'bar_chart' .. py:attribute:: FORECASTING_VOLUME :value: 'bar_chart_volume' .. py:attribute:: FORECASTING_HISTORY_LENGTH_ACCURACY :value: 'bar_chart_accuracy_by_history' .. py:class:: FieldDescriptorType Bases: :py:obj:`ApiEnum` Generic enumeration. Derive from this class to define new enumerations. .. py:attribute:: STRING :value: 'STRING' .. py:attribute:: INTEGER :value: 'INTEGER' .. py:attribute:: FLOAT :value: 'FLOAT' .. py:attribute:: BOOLEAN :value: 'BOOLEAN' .. py:attribute:: DATETIME :value: 'DATETIME' .. py:attribute:: DATE :value: 'DATE' .. py:class:: WorkflowNodeInputType Bases: :py:obj:`ApiEnum` Generic enumeration. Derive from this class to define new enumerations. .. py:attribute:: USER_INPUT :value: 'USER_INPUT' .. py:attribute:: WORKFLOW_VARIABLE :value: 'WORKFLOW_VARIABLE' .. py:attribute:: IGNORE :value: 'IGNORE' .. py:attribute:: CONSTANT :value: 'CONSTANT' .. py:class:: WorkflowNodeOutputType Bases: :py:obj:`ApiEnum` Generic enumeration. Derive from this class to define new enumerations. .. py:attribute:: ATTACHMENT :value: 'ATTACHMENT' .. py:attribute:: BOOLEAN :value: 'BOOLEAN' .. py:attribute:: FLOAT :value: 'FLOAT' .. py:attribute:: INTEGER :value: 'INTEGER' .. py:attribute:: DICT :value: 'DICT' .. py:attribute:: LIST :value: 'LIST' .. py:attribute:: STRING :value: 'STRING' .. py:attribute:: RUNTIME_SCHEMA :value: 'RUNTIME_SCHEMA' .. py:attribute:: ANY :value: 'ANY' .. py:method:: normalize_type(python_type) :classmethod: .. py:class:: OcrMode Bases: :py:obj:`ApiEnum` Generic enumeration. Derive from this class to define new enumerations. .. py:attribute:: AUTO :value: 'AUTO' .. py:attribute:: DEFAULT :value: 'DEFAULT' .. py:attribute:: LAYOUT :value: 'LAYOUT' .. py:attribute:: SCANNED :value: 'SCANNED' .. py:attribute:: COMPREHENSIVE :value: 'COMPREHENSIVE' .. py:attribute:: COMPREHENSIVE_V2 :value: 'COMPREHENSIVE_V2' .. py:attribute:: COMPREHENSIVE_TABLE_MD :value: 'COMPREHENSIVE_TABLE_MD' .. py:attribute:: COMPREHENSIVE_FORM_MD :value: 'COMPREHENSIVE_FORM_MD' .. py:attribute:: COMPREHENSIVE_FORM_AND_TABLE_MD :value: 'COMPREHENSIVE_FORM_AND_TABLE_MD' .. py:attribute:: TESSERACT_FAST :value: 'TESSERACT_FAST' .. py:attribute:: LLM :value: 'LLM' .. py:attribute:: AUGMENTED_LLM :value: 'AUGMENTED_LLM' .. py:method:: aws_ocr_modes() :classmethod: .. py:class:: DocumentType Bases: :py:obj:`ApiEnum` Generic enumeration. Derive from this class to define new enumerations. .. py:attribute:: SIMPLE_TEXT :value: 'SIMPLE_TEXT' .. py:attribute:: TEXT :value: 'TEXT' .. py:attribute:: TABLES_AND_FORMS :value: 'TABLES_AND_FORMS' .. 
py:attribute:: EMBEDDED_IMAGES :value: 'EMBEDDED_IMAGES' .. py:attribute:: SCANNED_TEXT :value: 'SCANNED_TEXT' .. py:attribute:: COMPREHENSIVE_MARKDOWN :value: 'COMPREHENSIVE_MARKDOWN' .. py:method:: is_ocr_forced(document_type) :classmethod: .. py:class:: StdDevThresholdType Bases: :py:obj:`ApiEnum` Generic enumeration. Derive from this class to define new enumerations. .. py:attribute:: ABSOLUTE :value: 'ABSOLUTE' .. py:attribute:: PERCENTILE :value: 'PERCENTILE' .. py:attribute:: STDDEV :value: 'STDDEV' .. py:class:: DataType Bases: :py:obj:`ApiEnum` Generic enumeration. Derive from this class to define new enumerations. .. py:attribute:: INTEGER :value: 'integer' .. py:attribute:: FLOAT :value: 'float' .. py:attribute:: STRING :value: 'string' .. py:attribute:: DATE :value: 'date' .. py:attribute:: DATETIME :value: 'datetime' .. py:attribute:: BOOLEAN :value: 'boolean' .. py:attribute:: LIST :value: 'list' .. py:attribute:: STRUCT :value: 'struct' .. py:attribute:: NULL :value: 'null' .. py:attribute:: BINARY :value: 'binary' .. py:class:: AgentInterface Bases: :py:obj:`ApiEnum` Generic enumeration. Derive from this class to define new enumerations. .. py:attribute:: DEFAULT :value: 'DEFAULT' .. py:attribute:: CHAT :value: 'CHAT' .. py:attribute:: MATRIX :value: 'MATRIX' .. py:attribute:: AUTONOMOUS :value: 'AUTONOMOUS' .. py:class:: WorkflowNodeTemplateType Bases: :py:obj:`ApiEnum` Generic enumeration. Derive from this class to define new enumerations. .. py:attribute:: TRIGGER :value: 'trigger' .. py:attribute:: DEFAULT :value: 'default' .. py:class:: ProjectConfigType Bases: :py:obj:`ApiEnum` Generic enumeration. Derive from this class to define new enumerations. .. py:attribute:: CONSTRAINTS :value: 'CONSTRAINTS' .. py:attribute:: CHAT_FEEDBACK :value: 'CHAT_FEEDBACK' .. py:attribute:: REVIEW_MODE :value: 'REVIEW_MODE' .. py:class:: CPUSize Bases: :py:obj:`ApiEnum` Generic enumeration. Derive from this class to define new enumerations. .. py:attribute:: SMALL :value: 'small' .. py:attribute:: MEDIUM :value: 'medium' .. py:attribute:: LARGE :value: 'large' .. py:class:: MemorySize Bases: :py:obj:`ApiEnum` Generic enumeration. Derive from this class to define new enumerations. .. py:attribute:: SMALL :value: 16 .. py:attribute:: MEDIUM :value: 32 .. py:attribute:: LARGE :value: 64 .. py:attribute:: XLARGE :value: 128 .. py:method:: from_value(value) :classmethod: .. py:class:: ResponseSectionType Bases: :py:obj:`ApiEnum` Generic enumeration. Derive from this class to define new enumerations. .. py:attribute:: AGENT_FLOW_BUTTON :value: 'agent_flow_button' .. py:attribute:: ATTACHMENTS :value: 'attachments' .. py:attribute:: BASE64_IMAGE :value: 'base64_image' .. py:attribute:: CHART :value: 'chart' .. py:attribute:: CODE :value: 'code' .. py:attribute:: COLLAPSIBLE_COMPONENT :value: 'collapsible_component' .. py:attribute:: IMAGE_URL :value: 'image_url' .. py:attribute:: RUNTIME_SCHEMA :value: 'runtime_schema' .. py:attribute:: LIST :value: 'list' .. py:attribute:: TABLE :value: 'table' .. py:attribute:: TEXT :value: 'text' .. py:class:: CodeLanguage Bases: :py:obj:`ApiEnum` Generic enumeration. Derive from this class to define new enumerations. .. py:attribute:: PYTHON :value: 'python' .. py:attribute:: SQL :value: 'sql' .. py:class:: DeploymentConversationType Bases: :py:obj:`ApiEnum` Generic enumeration. Derive from this class to define new enumerations. .. py:attribute:: CHAT_LLM :value: 'CHATLLM' .. py:attribute:: SIMPLE_AGENT :value: 'SIMPLE_AGENT' .. 
py:attribute:: COMPLEX_AGENT :value: 'COMPLEX_AGENT' .. py:attribute:: WORKFLOW_AGENT :value: 'WORKFLOW_AGENT' .. py:attribute:: COPILOT :value: 'COPILOT' .. py:attribute:: AGENT_CONTROLLER :value: 'AGENT_CONTROLLER' .. py:attribute:: CODE_LLM :value: 'CODE_LLM' .. py:attribute:: CODE_LLM_AGENT :value: 'CODE_LLM_AGENT' .. py:attribute:: CHAT_LLM_TASK :value: 'CHAT_LLM_TASK' .. py:attribute:: COMPUTER_AGENT :value: 'COMPUTER_AGENT' .. py:attribute:: SEARCH_LLM :value: 'SEARCH_LLM' .. py:attribute:: APP_LLM :value: 'APP_LLM' .. py:attribute:: TEST_AGENT :value: 'TEST_AGENT' .. py:class:: AgentClientType Bases: :py:obj:`ApiEnum` Generic enumeration. Derive from this class to define new enumerations. .. py:attribute:: CHAT_UI :value: 'CHAT_UI' .. py:attribute:: MESSAGING_APP :value: 'MESSAGING_APP' .. py:attribute:: API :value: 'API' .. py:class:: ApiClass Bases: :py:obj:`abc.ABC` Helper class that provides a standard way to create an ABC using inheritance. .. py:attribute:: _upper_snake_case_keys :type: bool :value: False .. py:attribute:: _support_kwargs :type: bool :value: False .. py:method:: __post_init__() .. py:method:: _get_builder() :classmethod: .. py:method:: __str__() .. py:method:: _repr_html_() .. py:method:: __getitem__(item) .. py:method:: __setitem__(item, value) .. py:method:: _unset_item(item) .. py:method:: get(item, default = None) .. py:method:: pop(item, default = NotImplemented) .. py:method:: to_dict() Standardizes converting an ApiClass to dictionary. Keys of response dictionary are converted to camel case. This also validates the fields ( type, value, etc ) received in the dictionary. .. py:method:: from_dict(input_dict) :classmethod: .. py:class:: _ApiClassFactory Bases: :py:obj:`abc.ABC` Helper class that provides a standard way to create an ABC using inheritance. .. py:attribute:: config_abstract_class :value: None .. py:attribute:: config_class_key :value: None .. py:attribute:: config_class_map .. py:method:: from_dict(config) :classmethod: .. py:class:: DocumentProcessingConfig Bases: :py:obj:`abacusai.api_class.abstract.ApiClass` Document processing configuration. :param document_type: Type of document. Can be one of Text, Tables and Forms, Embedded Images, etc. If not specified, type will be decided automatically. :type document_type: DocumentType :param highlight_relevant_text: Whether to extract bounding boxes and highlight relevant text in search results. Defaults to False. :type highlight_relevant_text: bool :param extract_bounding_boxes: Whether to perform OCR and extract bounding boxes. If False, no OCR will be done but only the embedded text from digital documents will be extracted. Defaults to False. :type extract_bounding_boxes: bool :param ocr_mode: OCR mode. There are different OCR modes available for different kinds of documents and use cases. This option only takes effect when extract_bounding_boxes is True. :type ocr_mode: OcrMode :param use_full_ocr: Whether to perform full OCR. If True, OCR will be performed on the full page. If False, OCR will be performed on the non-text regions only. By default, it will be decided automatically based on the OCR mode and the document type. This option only takes effect when extract_bounding_boxes is True. :type use_full_ocr: bool :param remove_header_footer: Whether to remove headers and footers. Defaults to False. This option only takes effect when extract_bounding_boxes is True. :type remove_header_footer: bool :param remove_watermarks: Whether to remove watermarks. 
By default, it will be decided automatically based on the OCR mode and the document type. This option only takes effect when extract_bounding_boxes is True. :type remove_watermarks: bool :param convert_to_markdown: Whether to convert extracted text to markdown. Defaults to False. This option only takes effect when extract_bounding_boxes is True. :type convert_to_markdown: bool :param mask_pii: Whether to mask personally identifiable information (PII) in the document text/tokens. Defaults to False. :type mask_pii: bool :param extract_images: Whether to extract images from the document e.g. diagrams in a PDF page. Defaults to False. :type extract_images: bool .. py:attribute:: document_type :type: abacusai.api_class.enums.DocumentType :value: None .. py:attribute:: highlight_relevant_text :type: bool :value: None .. py:attribute:: extract_bounding_boxes :type: bool :value: False .. py:attribute:: ocr_mode :type: abacusai.api_class.enums.OcrMode .. py:attribute:: use_full_ocr :type: bool :value: None .. py:attribute:: remove_header_footer :type: bool :value: False .. py:attribute:: remove_watermarks :type: bool :value: True .. py:attribute:: convert_to_markdown :type: bool :value: False .. py:attribute:: mask_pii :type: bool :value: False .. py:attribute:: extract_images :type: bool :value: False .. py:method:: __post_init__() .. py:method:: _detect_ocr_mode() .. py:method:: _get_filtered_dict(config) :classmethod: Filters out default values from the config .. py:class:: SamplingConfig Bases: :py:obj:`abacusai.api_class.abstract.ApiClass` An abstract class for the sampling config of a feature group .. py:attribute:: sampling_method :type: abacusai.api_class.enums.SamplingMethodType :value: None .. py:method:: _get_builder() :classmethod: .. py:method:: __post_init__() .. py:class:: NSamplingConfig Bases: :py:obj:`SamplingConfig` The number of distinct values of the key columns to include in the sample, or number of rows if key columns not specified. :param sample_count: The number of rows to include in the sample :type sample_count: int :param key_columns: The feature(s) to use as the key(s) when sampling :type key_columns: List[str] .. py:attribute:: sample_count :type: int .. py:attribute:: key_columns :type: List[str] :value: [] .. py:method:: __post_init__() .. py:class:: PercentSamplingConfig Bases: :py:obj:`SamplingConfig` The fraction of distinct values of the feature group to include in the sample. :param sample_percent: The percentage of the rows to sample :type sample_percent: float :param key_columns: The feature(s) to use as the key(s) when sampling :type key_columns: List[str] .. py:attribute:: sample_percent :type: float .. py:attribute:: key_columns :type: List[str] :value: [] .. py:method:: __post_init__() .. py:class:: _SamplingConfigFactory Bases: :py:obj:`abacusai.api_class.abstract._ApiClassFactory` Helper class that provides a standard way to create an ABC using inheritance. .. py:attribute:: config_class_key :value: 'sampling_method' .. py:attribute:: config_abstract_class .. py:attribute:: config_class_map .. py:class:: MergeConfig Bases: :py:obj:`abacusai.api_class.abstract.ApiClass` An abstract class for the merge config of a feature group .. py:attribute:: merge_mode :type: abacusai.api_class.enums.MergeMode :value: None .. py:method:: _get_builder() :classmethod: .. py:method:: __post_init__() .. py:class:: LastNMergeConfig Bases: :py:obj:`MergeConfig` Merge LAST N chunks/versions of an incremental dataset. :param num_versions: The number of versions to merge. 
num_versions == 0 means merge all versions. :type num_versions: int :param include_version_timestamp_column: If set, include a column with the creation timestamp of source FG versions. :type include_version_timestamp_column: bool .. py:attribute:: num_versions :type: int .. py:attribute:: include_version_timestamp_column :type: bool :value: None .. py:method:: __post_init__() .. py:class:: TimeWindowMergeConfig Bases: :py:obj:`MergeConfig` Merge rows within a given time window of the most recent timestamp :param feature_name: Time-based column to index on :type feature_name: str :param time_window_size_ms: Range of merged rows will be [MAX_TIME - time_window_size_ms, MAX_TIME] :type time_window_size_ms: int :param include_version_timestamp_column: If set, include a column with the creation timestamp of source FG versions. :type include_version_timestamp_column: bool .. py:attribute:: feature_name :type: str .. py:attribute:: time_window_size_ms :type: int .. py:attribute:: include_version_timestamp_column :type: bool :value: None .. py:method:: __post_init__() .. py:class:: _MergeConfigFactory Bases: :py:obj:`abacusai.api_class.abstract._ApiClassFactory` Helper class that provides a standard way to create an ABC using inheritance. .. py:attribute:: config_class_key :value: 'merge_mode' .. py:attribute:: config_abstract_class .. py:attribute:: config_class_map .. py:class:: OperatorConfig Bases: :py:obj:`abacusai.api_class.abstract.ApiClass` Configuration for a template Feature Group Operation .. py:attribute:: operator_type :type: abacusai.api_class.enums.OperatorType :value: None .. py:method:: _get_builder() :classmethod: .. py:method:: __post_init__() .. py:class:: UnpivotConfig Bases: :py:obj:`OperatorConfig` Unpivot Columns in a FeatureGroup. :param columns: Which columns to unpivot. :type columns: List[str] :param index_column: Name of new column containing the unpivoted column names as its values :type index_column: str :param value_column: Name of new column containing the row values that were unpivoted. :type value_column: str :param exclude: If True, the unpivoted columns are all the columns EXCEPT the ones in the columns argument. Default is False. :type exclude: bool .. py:attribute:: columns :type: List[str] :value: None .. py:attribute:: index_column :type: str :value: None .. py:attribute:: value_column :type: str :value: None .. py:attribute:: exclude :type: bool :value: None .. py:method:: __post_init__() .. py:class:: MarkdownConfig Bases: :py:obj:`OperatorConfig` Transform an input column to a markdown column. :param input_column: Name of input column to transform. :type input_column: str :param output_column: Name of output column to store transformed data. :type output_column: str :param input_column_type: Type of input column to transform. :type input_column_type: MarkdownOperatorInputType .. py:attribute:: input_column :type: str :value: None .. py:attribute:: output_column :type: str :value: None .. py:attribute:: input_column_type :type: abacusai.api_class.enums.MarkdownOperatorInputType :value: None .. py:method:: __post_init__() .. py:class:: CrawlerTransformConfig Bases: :py:obj:`OperatorConfig` Transform an input column of URLs to HTML text :param input_column: Name of input column to transform. :type input_column: str :param output_column: Name of output column to store transformed data.
.. py:class:: OperatorConfig Bases: :py:obj:`abacusai.api_class.abstract.ApiClass` Configuration for a template Feature Group Operation .. py:attribute:: operator_type :type: abacusai.api_class.enums.OperatorType :value: None .. py:method:: _get_builder() :classmethod: .. py:method:: __post_init__() .. py:class:: UnpivotConfig Bases: :py:obj:`OperatorConfig` Unpivot Columns in a FeatureGroup. :param columns: Which columns to unpivot. :type columns: List[str] :param index_column: Name of new column containing the unpivoted column names as its values :type index_column: str :param value_column: Name of new column containing the row values that were unpivoted. :type value_column: str :param exclude: If True, the unpivoted columns are all the columns EXCEPT the ones in the columns argument. Default is False. :type exclude: bool .. py:attribute:: columns :type: List[str] :value: None .. py:attribute:: index_column :type: str :value: None .. py:attribute:: value_column :type: str :value: None .. py:attribute:: exclude :type: bool :value: None .. py:method:: __post_init__() .. py:class:: MarkdownConfig Bases: :py:obj:`OperatorConfig` Transform an input column to a markdown column. :param input_column: Name of input column to transform. :type input_column: str :param output_column: Name of output column to store transformed data. :type output_column: str :param input_column_type: Type of input column to transform. :type input_column_type: MarkdownOperatorInputType .. py:attribute:: input_column :type: str :value: None .. py:attribute:: output_column :type: str :value: None .. py:attribute:: input_column_type :type: abacusai.api_class.enums.MarkdownOperatorInputType :value: None .. py:method:: __post_init__() .. py:class:: CrawlerTransformConfig Bases: :py:obj:`OperatorConfig` Transform an input column of URLs to HTML text :param input_column: Name of input column to transform. :type input_column: str :param output_column: Name of output column to store transformed data. :type output_column: str :param depth_column: Increasing depth explores more links, capturing more content :type depth_column: str :param disable_host_restriction: If True, will not restrict crawling to the same host. :type disable_host_restriction: bool :param honour_website_rules: If True, will respect robots.txt rules. :type honour_website_rules: bool :param user_agent: If provided, will use this user agent instead of randomly selecting one. :type user_agent: str .. py:attribute:: input_column :type: str :value: None .. py:attribute:: output_column :type: str :value: None .. py:attribute:: depth_column :type: str :value: None .. py:attribute:: input_column_type :type: str :value: None .. py:attribute:: crawl_depth :type: int :value: None .. py:attribute:: disable_host_restriction :type: bool :value: None .. py:attribute:: honour_website_rules :type: bool :value: None .. py:attribute:: user_agent :type: str :value: None .. py:method:: __post_init__() .. py:class:: ExtractDocumentDataConfig Bases: :py:obj:`OperatorConfig` Extracts data from documents. :param doc_id_column: Name of input document ID column. :type doc_id_column: str :param document_column: Name of the input document column which contains the page information. This column will be transformed to include the document processing config in the output feature group. :type document_column: str :param document_processing_config: Document processing configuration. :type document_processing_config: DocumentProcessingConfig .. py:attribute:: doc_id_column :type: str :value: None .. py:attribute:: document_column :type: str :value: None .. py:attribute:: document_processing_config :type: abacusai.api_class.dataset.DocumentProcessingConfig :value: None .. py:method:: __post_init__() .. py:class:: DataGenerationConfig Bases: :py:obj:`OperatorConfig` Generate synthetic data using a model for finetuning an LLM. :param prompt_col: Name of the input prompt column. :type prompt_col: str :param completion_col: Name of the output completion column. :type completion_col: str :param description_col: Name of the description column. :type description_col: str :param id_col: Name of the identifier column. :type id_col: str :param generation_instructions: Instructions for the data generation model. :type generation_instructions: str :param temperature: Sampling temperature for the model. :type temperature: float :param fewshot_examples: Number of fewshot examples used to prompt the model. :type fewshot_examples: int :param concurrency: Number of concurrent processes. :type concurrency: int :param examples_per_target: Number of examples per target. :type examples_per_target: int :param subset_size: Size of the subset to use for generation. :type subset_size: Optional[int] :param verify_response: Whether to verify the response. :type verify_response: bool :param token_budget: Token budget for generation. :type token_budget: int :param oversample: Whether to oversample the data. :type oversample: bool :param documentation_char_limit: Character limit for documentation. :type documentation_char_limit: int :param frequency_penalty: Penalty for frequency of token appearance. :type frequency_penalty: float :param model: Model to use for data generation. :type model: str :param seed: Seed for random number generation. :type seed: Optional[int] .. py:attribute:: prompt_col :type: str :value: None .. py:attribute:: completion_col :type: str :value: None .. py:attribute:: description_col :type: str :value: None .. py:attribute:: id_col :type: str :value: None .. py:attribute:: generation_instructions :type: str :value: None .. py:attribute:: temperature :type: float :value: None .. py:attribute:: fewshot_examples :type: int :value: None .. py:attribute:: concurrency :type: int :value: None .. py:attribute:: examples_per_target :type: int :value: None .. py:attribute:: subset_size :type: int :value: None .. py:attribute:: verify_response :type: bool :value: None .. py:attribute:: token_budget :type: int :value: None .. py:attribute:: oversample :type: bool :value: None .. py:attribute:: documentation_char_limit :type: int :value: None .. py:attribute:: frequency_penalty :type: float :value: None .. py:attribute:: model :type: str :value: None .. py:attribute:: seed :type: int :value: None .. py:method:: __post_init__()
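
The operator configs are constructed the same way. A minimal sketch of an unpivot operation and a document-extraction operation (column names and option values are illustrative; ``DocumentProcessingConfig`` is the dataset-level class documented earlier in this module):

.. code-block:: python

    from abacusai.api_class import (
        DocumentProcessingConfig,
        ExtractDocumentDataConfig,
        UnpivotConfig,
    )

    # Turn wide monthly revenue columns into (month, revenue) rows.
    unpivot = UnpivotConfig(
        columns=['jan_revenue', 'feb_revenue', 'mar_revenue'],
        index_column='month',
        value_column='revenue',
    )

    # Extract document text, with bounding boxes and markdown conversion enabled.
    extract_docs = ExtractDocumentDataConfig(
        doc_id_column='doc_id',
        document_column='document',
        document_processing_config=DocumentProcessingConfig(
            extract_bounding_boxes=True,
            convert_to_markdown=True,
        ),
    )

    print(unpivot.to_dict())
    print(extract_docs.to_dict())
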
.. py:class:: UnionTransformConfig Bases: :py:obj:`OperatorConfig` Takes the union of the current feature group with one or more selected feature groups of the same type. :param feature_group_ids: List of feature group IDs to union with source FG. :type feature_group_ids: List[str] :param drop_non_intersecting_columns: If true, will drop columns that are not present in all feature groups. If false, fills missing columns with nulls. :type drop_non_intersecting_columns: bool .. py:attribute:: feature_group_ids :type: List[str] :value: None .. py:attribute:: drop_non_intersecting_columns :type: bool :value: False .. py:method:: __post_init__() .. py:class:: _OperatorConfigFactory Bases: :py:obj:`abacusai.api_class.abstract._ApiClassFactory` A class to select and return the correct type of Operator Config based on a serialized OperatorConfig instance. .. py:attribute:: config_abstract_class .. py:attribute:: config_class_key :value: 'operator_type' .. py:attribute:: config_class_map .. py:class:: ApiClass Bases: :py:obj:`abc.ABC` Helper class that provides a standard way to create an ABC using inheritance. .. py:attribute:: _upper_snake_case_keys :type: bool :value: False .. py:attribute:: _support_kwargs :type: bool :value: False .. py:method:: __post_init__() .. py:method:: _get_builder() :classmethod: .. py:method:: __str__() .. py:method:: _repr_html_() .. py:method:: __getitem__(item) .. py:method:: __setitem__(item, value) .. py:method:: _unset_item(item) .. py:method:: get(item, default = None) .. py:method:: pop(item, default = NotImplemented) .. py:method:: to_dict() Standardizes converting an ApiClass to dictionary. Keys of response dictionary are converted to camel case. This also validates the fields ( type, value, etc ) received in the dictionary. .. py:method:: from_dict(input_dict) :classmethod: .. py:class:: _ApiClassFactory Bases: :py:obj:`abc.ABC` Helper class that provides a standard way to create an ABC using inheritance. .. py:attribute:: config_abstract_class :value: None .. py:attribute:: config_class_key :value: None .. py:attribute:: config_class_map .. py:method:: from_dict(config) :classmethod: .. py:class:: TrainingConfig Bases: :py:obj:`abacusai.api_class.abstract.ApiClass` An abstract class for the training config options used to train the model. .. py:attribute:: _upper_snake_case_keys :type: bool :value: True .. py:attribute:: _support_kwargs :type: bool :value: True .. py:attribute:: kwargs :type: dict .. py:attribute:: problem_type :type: abacusai.api_class.enums.ProblemType :value: None .. py:attribute:: algorithm :type: str :value: None .. py:method:: _get_builder() :classmethod:
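
Problem-type-specific training configs follow. Each subclass is built with keyword arguments for the parameters it documents and serialized with ``to_dict()`` when training a model; a minimal sketch using the PERSONALIZATION config below (values are illustrative, and the objective member is an assumption about ``PersonalizationObjective``):

.. code-block:: python

    from abacusai.api_class import PersonalizationTrainingConfig
    from abacusai.api_class.enums import PersonalizationObjective

    config = PersonalizationTrainingConfig(
        objective=PersonalizationObjective.NDCG,  # assumed member name, for illustration
        test_split=15,
        filter_history=True,
    )

    # Unset fields keep their None defaults and are camel-cased by to_dict().
    payload = config.to_dict()
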
.. py:class:: PersonalizationTrainingConfig Bases: :py:obj:`TrainingConfig` Training config for the PERSONALIZATION problem type :param objective: Ranking scheme used to select final best model. :type objective: PersonalizationObjective :param sort_objective: Ranking scheme used to sort models on the metrics page. :type sort_objective: PersonalizationObjective :param training_mode: Whether to train in production or experimental mode. Defaults to EXP. :type training_mode: PersonalizationTrainingMode :param target_action_types: List of action types to use as targets for training. :type target_action_types: List[str] :param target_action_weights: Dictionary of action types to weights for training. :type target_action_weights: Dict[str, float] :param session_event_types: List of event types to treat as occurrences of sessions. :type session_event_types: List[str] :param test_split: Percent of dataset to use for test data. We support using a range between 6% and 20% of your dataset as test data. :type test_split: int :param recent_days_for_training: Limit training data to a certain latest number of days. :type recent_days_for_training: int :param training_start_date: Only consider training interaction data after this date. Specified in the timezone of the dataset. :type training_start_date: str :param test_on_user_split: Use user splits instead of using time splits, when validating and testing the model. :type test_on_user_split: bool :param test_split_on_last_k_items: Use last k items instead of global timestamp splits, when validating and testing the model. :type test_split_on_last_k_items: bool :param test_last_items_length: Number of items to leave out for each user when using leave k out folds. :type test_last_items_length: int :param test_window_length_hours: Duration (in hours) of most recent time window to use when validating and testing the model. :type test_window_length_hours: int :param explicit_time_split: Sets an explicit time-based test boundary. :type explicit_time_split: bool :param test_row_indicator: Column indicating which rows to use for training (TRAIN), validation (VAL) and testing (TEST). :type test_row_indicator: str :param full_data_retraining: Train models separately with all the data. :type full_data_retraining: bool :param sequential_training: Train a model sequentially through time. :type sequential_training: bool :param data_split_feature_group_table_name: Specify the table name of the feature group to export training data with the fold column. :type data_split_feature_group_table_name: str :param optimized_event_type: The final event type to optimize for and compute metrics on. :type optimized_event_type: str :param dropout_rate: Dropout rate for neural network. :type dropout_rate: int :param batch_size: Batch size for neural network. :type batch_size: BatchSize :param disable_transformer: Disable training the transformer algorithm. :type disable_transformer: bool :param disable_gpu: Disable training on GPU. :type disable_gpu: bool :param filter_history: Do not recommend items the user has already interacted with. :type filter_history: bool :param action_types_exclusion_days: Mapping from action type to number of days for which we exclude previously interacted items from prediction :type action_types_exclusion_days: Dict[str, float] :param session_dedupe_mins: Minimum number of minutes between two sessions for a user. :type session_dedupe_mins: float :param max_history_length: Maximum length of user-item history to include user in training examples.
:type max_history_length: int :param compute_rerank_metrics: Compute metrics based on rerank results. :type compute_rerank_metrics: bool :param add_time_features: Include interaction time as a feature. :type add_time_features: bool :param disable_timestamp_scalar_features: Exclude timestamp scalar features. :type disable_timestamp_scalar_features: bool :param compute_session_metrics: Evaluate models based on how well they are able to predict the next session of interactions. :type compute_session_metrics: bool :param max_user_history_len_percentile: Filter out users with history length above this percentile. :type max_user_history_len_percentile: int :param downsample_item_popularity_percentile: Downsample items more popular than this percentile. :type downsample_item_popularity_percentile: float :param use_user_id_feature: Use user id as a feature in CTR models. :type use_user_id_feature: bool :param min_item_history: Minimum number of interactions an item must have to be included in training. :type min_item_history: int :param query_column: Name of column in the interactions table that represents a natural language query, e.g. 'blue t-shirt'. :type query_column: str :param item_query_column: Name of column in the item catalog that will be matched to the query column in the interactions table. :type item_query_column: str :param include_item_id_feature: Add Item-Id to the input features of the model. Applicable for Embedding distance and CTR models. :type include_item_id_feature: bool .. py:attribute:: objective :type: abacusai.api_class.enums.PersonalizationObjective :value: None .. py:attribute:: sort_objective :type: abacusai.api_class.enums.PersonalizationObjective :value: None .. py:attribute:: training_mode :type: abacusai.api_class.enums.PersonalizationTrainingMode :value: None .. py:attribute:: target_action_types :type: List[str] :value: None .. py:attribute:: target_action_weights :type: Dict[str, float] :value: None .. py:attribute:: session_event_types :type: List[str] :value: None .. py:attribute:: test_split :type: int :value: None .. py:attribute:: recent_days_for_training :type: int :value: None .. py:attribute:: training_start_date :type: str :value: None .. py:attribute:: test_on_user_split :type: bool :value: None .. py:attribute:: test_split_on_last_k_items :type: bool :value: None .. py:attribute:: test_last_items_length :type: int :value: None .. py:attribute:: test_window_length_hours :type: int :value: None .. py:attribute:: explicit_time_split :type: bool :value: None .. py:attribute:: test_row_indicator :type: str :value: None .. py:attribute:: full_data_retraining :type: bool :value: None .. py:attribute:: sequential_training :type: bool :value: None .. py:attribute:: data_split_feature_group_table_name :type: str :value: None .. py:attribute:: optimized_event_type :type: str :value: None .. py:attribute:: dropout_rate :type: int :value: None .. py:attribute:: batch_size :type: abacusai.api_class.enums.BatchSize :value: None .. py:attribute:: disable_transformer :type: bool :value: None .. py:attribute:: disable_gpu :type: bool :value: None .. py:attribute:: filter_history :type: bool :value: None .. py:attribute:: action_types_exclusion_days :type: Dict[str, float] :value: None .. py:attribute:: max_history_length :type: int :value: None .. py:attribute:: compute_rerank_metrics :type: bool :value: None .. py:attribute:: add_time_features :type: bool :value: None .. py:attribute:: disable_timestamp_scalar_features :type: bool :value: None .. 
py:attribute:: compute_session_metrics :type: bool :value: None .. py:attribute:: query_column :type: str :value: None .. py:attribute:: item_query_column :type: str :value: None .. py:attribute:: use_user_id_feature :type: bool :value: None .. py:attribute:: session_dedupe_mins :type: float :value: None .. py:attribute:: include_item_id_feature :type: bool :value: None .. py:attribute:: max_user_history_len_percentile :type: int :value: None .. py:attribute:: downsample_item_popularity_percentile :type: float :value: None .. py:attribute:: min_item_history :type: int :value: None .. py:method:: __post_init__() .. py:class:: RegressionTrainingConfig Bases: :py:obj:`TrainingConfig` Training config for the PREDICTIVE_MODELING problem type :param objective: Ranking scheme used to select final best model. :type objective: RegressionObjective :param sort_objective: Ranking scheme used to sort models on the metrics page. :type sort_objective: RegressionObjective :param tree_hpo_mode: (RegressionTreeHPOMode): Turning off Rapid Experimentation will take longer to train. :param type_of_split: Type of data splitting into train/test (validation also). :type type_of_split: RegressionTypeOfSplit :param test_split: Percent of dataset to use for test data. We support using a range between 5% to 20% of your dataset to use as test data. :type test_split: int :param disable_test_val_fold: Do not create a TEST_VAL set. All records which would be part of the TEST_VAL fold otherwise, remain in the TEST fold. :type disable_test_val_fold: bool :param k_fold_cross_validation: Use this to force k-fold cross validation bagging on or off. :type k_fold_cross_validation: bool :param num_cv_folds: Specify the value of k in k-fold cross validation. :type num_cv_folds: int :param timestamp_based_splitting_column: Timestamp column selected for splitting into test and train. :type timestamp_based_splitting_column: str :param timestamp_based_splitting_method: Method of selecting TEST set, top percentile wise or after a given timestamp. :type timestamp_based_splitting_method: RegressionTimeSplitMethod :param test_splitting_timestamp: Rows with timestamp greater than this will be considered to be in the test set. :type test_splitting_timestamp: str :param sampling_unit_keys: Constrain train/test separation to partition a column. :type sampling_unit_keys: List[str] :param test_row_indicator: Column indicating which rows to use for training (TRAIN) and testing (TEST). Validation (VAL) can also be specified. :type test_row_indicator: str :param full_data_retraining: Train models separately with all the data. :type full_data_retraining: bool :param rebalance_classes: Class weights are computed as the inverse of the class frequency from the training dataset when this option is selected as "Yes". It is useful when the classes in the dataset are unbalanced. Re-balancing classes generally boosts recall at the cost of precision on rare classes. :type rebalance_classes: bool :param rare_class_augmentation_threshold: Augments any rare class whose relative frequency with respect to the most frequent class is less than this threshold. Default = 0.1 for classification problems with rare classes. :type rare_class_augmentation_threshold: float :param augmentation_strategy: Strategy to deal with class imbalance and data augmentation. :type augmentation_strategy: RegressionAugmentationStrategy :param training_rows_downsample_ratio: Uses this ratio to train on a sample of the dataset provided. 
:type training_rows_downsample_ratio: float :param active_labels_column: Specify a column to use as the active columns in a multi-label setting. :type active_labels_column: str :param min_categorical_count: Minimum threshold to consider a value different from the unknown placeholder. :type min_categorical_count: int :param sample_weight: Specify a column to use as the weight of a sample for training and eval. :type sample_weight: str :param numeric_clipping_percentile: Uses this option to clip the top and bottom x percentile of numeric feature columns where x is the value of this option. :type numeric_clipping_percentile: float :param target_transform: Specify a transform (e.g. log, quantile) to apply to the target variable. :type target_transform: RegressionTargetTransform :param ignore_datetime_features: Remove all datetime features from the model. Useful while generalizing to different time periods. :type ignore_datetime_features: bool :param max_text_words: Maximum number of words to use from text fields. :type max_text_words: int :param perform_feature_selection: If enabled, additional algorithms which support feature selection as a pretraining step will be trained separately with the selected subset of features. The details about their selected features can be found in their respective logs. :type perform_feature_selection: bool :param feature_selection_intensity: This determines the strictness with which features will be filtered out. 1 being very lenient (more features kept), 100 being very strict. :type feature_selection_intensity: int :param batch_size: Batch size. :type batch_size: BatchSize :param dropout_rate: Dropout percentage rate. :type dropout_rate: int :param pretrained_model_name: Enable algorithms which process text using pretrained multilingual NLP models. :type pretrained_model_name: str :param pretrained_llm_name: Enable algorithms which process text using pretrained large language models. :type pretrained_llm_name: str :param is_multilingual: Enable algorithms which process text using pretrained multilingual NLP models. :type is_multilingual: bool :param loss_function: Loss function to be used as objective for model training. :type loss_function: RegressionLossFunction :param loss_parameters: Loss function params in the format key=value;key=value;..... :type loss_parameters: str :param target_encode_categoricals: Use this to turn target encoding on categorical features on or off. :type target_encode_categoricals: bool :param drop_original_categoricals: This option helps us choose whether to also feed the original label encoded categorical columns to the models along with their target encoded versions. :type drop_original_categoricals: bool :param monotonically_increasing_features: Constrain the model such that it behaves as if the target feature is monotonically increasing with the selected features :type monotonically_increasing_features: List[str] :param monotonically_decreasing_features: Constrain the model such that it behaves as if the target feature is monotonically decreasing with the selected features :type monotonically_decreasing_features: List[str] :param data_split_feature_group_table_name: Specify the table name of the feature group to export training data with the fold column. :type data_split_feature_group_table_name: str :param custom_loss_functions: Registered custom losses available for selection. :type custom_loss_functions: List[str] :param custom_metrics: Registered custom metrics available for selection.
:type custom_metrics: List[str] :param partial_dependence_analysis: Specify whether to run partial dependence plots for all features or only some features. :type partial_dependence_analysis: PartialDependenceAnalysis :param do_masked_language_model_pretraining: Specify whether to run a masked language model unsupervised pretraining step before supervised training in certain supported algorithms which use BERT-like backbones. :type do_masked_language_model_pretraining: bool :param max_tokens_in_sentence: Specify the max tokens to be kept in a sentence based on the truncation strategy. :type max_tokens_in_sentence: int :param truncation_strategy: What strategy to use to deal with text rows with more than a given number of tokens (if num of tokens is more than "max_tokens_in_sentence"). :type truncation_strategy: str .. py:attribute:: objective :type: abacusai.api_class.enums.RegressionObjective :value: None .. py:attribute:: sort_objective :type: abacusai.api_class.enums.RegressionObjective :value: None .. py:attribute:: tree_hpo_mode :type: abacusai.api_class.enums.RegressionTreeHPOMode :value: None .. py:attribute:: partial_dependence_analysis :type: abacusai.api_class.enums.PartialDependenceAnalysis :value: None .. py:attribute:: type_of_split :type: abacusai.api_class.enums.RegressionTypeOfSplit :value: None .. py:attribute:: test_split :type: int :value: None .. py:attribute:: disable_test_val_fold :type: bool :value: None .. py:attribute:: k_fold_cross_validation :type: bool :value: None .. py:attribute:: num_cv_folds :type: int :value: None .. py:attribute:: timestamp_based_splitting_column :type: str :value: None .. py:attribute:: timestamp_based_splitting_method :type: abacusai.api_class.enums.RegressionTimeSplitMethod :value: None .. py:attribute:: test_splitting_timestamp :type: str :value: None .. py:attribute:: sampling_unit_keys :type: List[str] :value: None .. py:attribute:: test_row_indicator :type: str :value: None .. py:attribute:: full_data_retraining :type: bool :value: None .. py:attribute:: rebalance_classes :type: bool :value: None .. py:attribute:: rare_class_augmentation_threshold :type: float :value: None .. py:attribute:: augmentation_strategy :type: abacusai.api_class.enums.RegressionAugmentationStrategy :value: None .. py:attribute:: training_rows_downsample_ratio :type: float :value: None .. py:attribute:: active_labels_column :type: str :value: None .. py:attribute:: min_categorical_count :type: int :value: None .. py:attribute:: sample_weight :type: str :value: None .. py:attribute:: numeric_clipping_percentile :type: float :value: None .. py:attribute:: target_transform :type: abacusai.api_class.enums.RegressionTargetTransform :value: None .. py:attribute:: ignore_datetime_features :type: bool :value: None .. py:attribute:: max_text_words :type: int :value: None .. py:attribute:: perform_feature_selection :type: bool :value: None .. py:attribute:: feature_selection_intensity :type: int :value: None .. py:attribute:: batch_size :type: abacusai.api_class.enums.BatchSize :value: None .. py:attribute:: dropout_rate :type: int :value: None .. py:attribute:: pretrained_model_name :type: str :value: None .. py:attribute:: pretrained_llm_name :type: str :value: None .. py:attribute:: is_multilingual :type: bool :value: None .. py:attribute:: do_masked_language_model_pretraining :type: bool :value: None .. py:attribute:: max_tokens_in_sentence :type: int :value: None .. py:attribute:: truncation_strategy :type: str :value: None ..
py:attribute:: loss_function :type: abacusai.api_class.enums.RegressionLossFunction :value: None .. py:attribute:: loss_parameters :type: str :value: None .. py:attribute:: target_encode_categoricals :type: bool :value: None .. py:attribute:: drop_original_categoricals :type: bool :value: None .. py:attribute:: monotonically_increasing_features :type: List[str] :value: None .. py:attribute:: monotonically_decreasing_features :type: List[str] :value: None .. py:attribute:: data_split_feature_group_table_name :type: str :value: None .. py:attribute:: custom_loss_functions :type: List[str] :value: None .. py:attribute:: custom_metrics :type: List[str] :value: None .. py:method:: __post_init__()
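
A similar sketch for the PREDICTIVE_MODELING config above (all values are illustrative; the loss-function member is an assumption about ``RegressionLossFunction``):

.. code-block:: python

    from abacusai.api_class import RegressionTrainingConfig
    from abacusai.api_class.enums import RegressionLossFunction

    config = RegressionTrainingConfig(
        test_split=10,
        rebalance_classes=True,
        target_encode_categoricals=True,
        loss_function=RegressionLossFunction.MAE,  # assumed member name
    )
    print(config.to_dict())
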
.. py:class:: ForecastingTrainingConfig Bases: :py:obj:`TrainingConfig` Training config for the FORECASTING problem type :param prediction_length: How many timesteps in the future to predict. :type prediction_length: int :param objective: Ranking scheme used to select final best model. :type objective: ForecastingObjective :param sort_objective: Ranking scheme used to sort models on the metrics page. :type sort_objective: ForecastingObjective :param forecast_frequency: Forecast frequency. :type forecast_frequency: ForecastingFrequency :param probability_quantiles: Prediction quantiles. :type probability_quantiles: List[float] :param force_prediction_length: Force length of test window to be the same as prediction length. :type force_prediction_length: bool :param filter_items: Filter items with small history and volume. :type filter_items: bool :param enable_feature_selection: Enable feature selection. :type enable_feature_selection: bool :param enable_padding: Pad series to the max_date of the dataset :type enable_padding: bool :param enable_cold_start: Enable cold start forecasting by training/predicting for zero history items. :type enable_cold_start: bool :param enable_multiple_backtests: Whether to enable multiple backtesting or not. :type enable_multiple_backtests: bool :param num_backtesting_windows: Total backtesting windows to use for the training. :type num_backtesting_windows: int :param backtesting_window_step_size: Use this step size to shift backtesting windows for model training. :type backtesting_window_step_size: int :param full_data_retraining: Train models separately with all the data. :type full_data_retraining: bool :param additional_forecast_keys: List[str]: List of categoricals in timeseries that can act as multi-identifier. :param experimentation_mode: Selecting Thorough Experimentation will take longer to train. :type experimentation_mode: ExperimentationMode :param type_of_split: Type of data splitting into train/test. :type type_of_split: ForecastingDataSplitType :param test_by_item: Partition train/test data by item rather than time if true. :type test_by_item: bool :param test_start: Limit training data to dates before the given test start. :type test_start: str :param test_split: Percent of dataset to use for test data. We support using a range between 5% and 20% of your dataset as test data. :type test_split: int :param loss_function: Loss function for training neural network. :type loss_function: ForecastingLossFunction :param underprediction_weight: Weight for underpredictions :type underprediction_weight: float :param disable_networks_without_analytic_quantiles: Disable neural networks whose quantile functions do not have analytic expressions (e.g., mixture models) :type disable_networks_without_analytic_quantiles: bool :param initial_learning_rate: Initial learning rate. :type initial_learning_rate: float :param l2_regularization_factor: L2 regularization factor. :type l2_regularization_factor: float :param dropout_rate: Dropout percentage rate. :type dropout_rate: int :param recurrent_layers: Number of recurrent layers to stack in network. :type recurrent_layers: int :param recurrent_units: Number of units in each recurrent layer. :type recurrent_units: int :param convolutional_layers: Number of convolutional layers to stack on top of recurrent layers in network. :type convolutional_layers: int :param convolution_filters: Number of filters in each convolution. :type convolution_filters: int :param local_scaling_mode: Options to make NN inputs stationary in high dynamic range datasets. :type local_scaling_mode: ForecastingLocalScaling :param zero_predictor: Include subnetwork to classify points where target equals zero. :type zero_predictor: bool :param skip_missing: Make the RNN ignore missing entries rather than processing them. :type skip_missing: bool :param batch_size: Batch size. :type batch_size: ForecastingBatchSize :param batch_renormalization: Enable batch renormalization between layers. :type batch_renormalization: bool :param history_length: While training, how much history to consider. :type history_length: int :param prediction_step_size: Number of future periods to include in objective for each training sample. :type prediction_step_size: int :param training_point_overlap: Amount of overlap to allow between training samples. :type training_point_overlap: float :param max_scale_context: Maximum context to use for local scaling. :type max_scale_context: int :param quantiles_extension_method: Quantile extension method :type quantiles_extension_method: ForecastingQuanitlesExtensionMethod :param number_of_samples: Number of samples for ancestral simulation :type number_of_samples: int :param symmetrize_quantiles: Force symmetric quantiles (like in Gaussian distribution) :type symmetrize_quantiles: bool :param use_log_transforms: Apply logarithmic transformations to input data. :type use_log_transforms: bool :param smooth_history: Smooth (low pass filter) the timeseries. :type smooth_history: float :param local_scale_target: Use per training/prediction window target scaling. :type local_scale_target: bool :param use_clipping: Apply clipping to input data to stabilize the training. :type use_clipping: bool :param timeseries_weight_column: If set, we use the values in this column from timeseries data to assign time dependent item weights during training and evaluation. :type timeseries_weight_column: str :param item_attributes_weight_column: If set, we use the values in this column from item attributes data to assign weights to items during training and evaluation. :type item_attributes_weight_column: str :param use_timeseries_weights_in_objective: If True, we include weights from column set as "TIMESERIES WEIGHT COLUMN" in objective functions. :type use_timeseries_weights_in_objective: bool :param use_item_weights_in_objective: If True, we include weights from column set as "ITEM ATTRIBUTES WEIGHT COLUMN" in objective functions.
:type use_item_weights_in_objective: bool :param skip_timeseries_weight_scaling: If True, we will avoid normalizing the weights. :type skip_timeseries_weight_scaling: bool :param timeseries_loss_weight_column: Use value in this column to weight the loss while training. :type timeseries_loss_weight_column: str :param use_item_id: Include a feature to indicate the item being forecast. :type use_item_id: bool :param use_all_item_totals: Include as input total target across items. :type use_all_item_totals: bool :param handle_zeros_as_missing_values: If True, handle zero values in demand as missing data. :type handle_zeros_as_missing_values: bool :param datetime_holiday_calendars: Holiday calendars to augment training with. :type datetime_holiday_calendars: List[HolidayCalendars] :param fill_missing_values: Strategy for filling in missing values. :type fill_missing_values: List[List[dict]] :param enable_clustering: Enable clustering in forecasting. :type enable_clustering: bool :param data_split_feature_group_table_name: Specify the table name of the feature group to export training data with the fold column. :type data_split_feature_group_table_name: str :param custom_loss_functions: Registered custom losses available for selection. :type custom_loss_functions: List[str] :param custom_metrics: Registered custom metrics available for selection. :type custom_metrics: List[str] :param return_fractional_forecasts: Use this to return fractional forecast values during prediction :param allow_training_with_small_history: Allows training with fewer than 100 rows in the dataset .. py:attribute:: prediction_length :type: int :value: None .. py:attribute:: objective :type: abacusai.api_class.enums.ForecastingObjective :value: None .. py:attribute:: sort_objective :type: abacusai.api_class.enums.ForecastingObjective :value: None .. py:attribute:: forecast_frequency :type: abacusai.api_class.enums.ForecastingFrequency :value: None .. py:attribute:: probability_quantiles :type: List[float] :value: None .. py:attribute:: force_prediction_length :type: bool :value: None .. py:attribute:: filter_items :type: bool :value: None .. py:attribute:: enable_feature_selection :type: bool :value: None .. py:attribute:: enable_padding :type: bool :value: None .. py:attribute:: enable_cold_start :type: bool :value: None .. py:attribute:: enable_multiple_backtests :type: bool :value: None .. py:attribute:: num_backtesting_windows :type: int :value: None .. py:attribute:: backtesting_window_step_size :type: int :value: None .. py:attribute:: full_data_retraining :type: bool :value: None .. py:attribute:: additional_forecast_keys :type: List[str] :value: None .. py:attribute:: experimentation_mode :type: abacusai.api_class.enums.ExperimentationMode :value: None .. py:attribute:: type_of_split :type: abacusai.api_class.enums.ForecastingDataSplitType :value: None .. py:attribute:: test_by_item :type: bool :value: None .. py:attribute:: test_start :type: str :value: None .. py:attribute:: test_split :type: int :value: None .. py:attribute:: loss_function :type: abacusai.api_class.enums.ForecastingLossFunction :value: None .. py:attribute:: underprediction_weight :type: float :value: None .. py:attribute:: disable_networks_without_analytic_quantiles :type: bool :value: None .. py:attribute:: initial_learning_rate :type: float :value: None .. py:attribute:: l2_regularization_factor :type: float :value: None .. py:attribute:: dropout_rate :type: int :value: None .. py:attribute:: recurrent_layers :type: int :value: None .. py:attribute:: recurrent_units :type: int :value: None .. py:attribute:: convolutional_layers :type: int :value: None .. py:attribute:: convolution_filters :type: int :value: None .. py:attribute:: local_scaling_mode :type: abacusai.api_class.enums.ForecastingLocalScaling :value: None .. py:attribute:: zero_predictor :type: bool :value: None .. py:attribute:: skip_missing :type: bool :value: None .. py:attribute:: batch_size :type: abacusai.api_class.enums.BatchSize :value: None .. py:attribute:: batch_renormalization :type: bool :value: None .. py:attribute:: history_length :type: int :value: None .. py:attribute:: prediction_step_size :type: int :value: None .. py:attribute:: training_point_overlap :type: float :value: None .. py:attribute:: max_scale_context :type: int :value: None .. py:attribute:: quantiles_extension_method :type: abacusai.api_class.enums.ForecastingQuanitlesExtensionMethod :value: None .. py:attribute:: number_of_samples :type: int :value: None .. py:attribute:: symmetrize_quantiles :type: bool :value: None .. py:attribute:: use_log_transforms :type: bool :value: None .. py:attribute:: smooth_history :type: float :value: None .. py:attribute:: local_scale_target :type: bool :value: None .. py:attribute:: use_clipping :type: bool :value: None .. py:attribute:: timeseries_weight_column :type: str :value: None .. py:attribute:: item_attributes_weight_column :type: str :value: None .. py:attribute:: use_timeseries_weights_in_objective :type: bool :value: None .. py:attribute:: use_item_weights_in_objective :type: bool :value: None .. py:attribute:: skip_timeseries_weight_scaling :type: bool :value: None .. py:attribute:: timeseries_loss_weight_column :type: str :value: None .. py:attribute:: use_item_id :type: bool :value: None .. py:attribute:: use_all_item_totals :type: bool :value: None .. py:attribute:: handle_zeros_as_missing_values :type: bool :value: None .. py:attribute:: datetime_holiday_calendars :type: List[abacusai.api_class.enums.HolidayCalendars] :value: None .. py:attribute:: fill_missing_values :type: List[List[dict]] :value: None .. py:attribute:: enable_clustering :type: bool :value: None .. py:attribute:: data_split_feature_group_table_name :type: str :value: None .. py:attribute:: custom_loss_functions :type: List[str] :value: None .. py:attribute:: custom_metrics :type: List[str] :value: None .. py:attribute:: return_fractional_forecasts :type: bool :value: None .. py:attribute:: allow_training_with_small_history :type: bool :value: None .. py:method:: __post_init__()
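
A minimal sketch of the FORECASTING config above (all values are illustrative; the frequency member is an assumption about ``ForecastingFrequency``):

.. code-block:: python

    from abacusai.api_class import ForecastingTrainingConfig
    from abacusai.api_class.enums import ForecastingFrequency

    config = ForecastingTrainingConfig(
        prediction_length=30,
        forecast_frequency=ForecastingFrequency.DAILY,  # assumed member name
        probability_quantiles=[0.1, 0.5, 0.9],
        enable_cold_start=True,
    )
    print(config.to_dict())
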
.. py:class:: NamedEntityExtractionTrainingConfig Bases: :py:obj:`TrainingConfig` Training config for the NAMED_ENTITY_EXTRACTION problem type :param llm_for_ner: LLM to use for NER from among the available LLMs :type llm_for_ner: NERForLLM :param test_split: Percent of dataset to use for test data. We support using a range between 5 ( i.e. 5% ) to 20 ( i.e. 20% ) of your dataset. :type test_split: int :param test_row_indicator: Column indicating which rows to use for training (TRAIN) and testing (TEST). :type test_row_indicator: str :param active_labels_column: Entities that have been marked in a particular text :type active_labels_column: str :param document_format: Format of the input documents. :type document_format: NLPDocumentFormat :param minimum_bounding_box_overlap_ratio: Tokens are considered to belong to an annotation if the user bounding box is provided and the ratio of (token_bounding_box ∩ annotation_bounding_box) / token_bounding_area is greater than the provided value. :type minimum_bounding_box_overlap_ratio: float :param save_predicted_pdf: Whether to save predicted PDF documents :type save_predicted_pdf: bool :param enhanced_ocr: Enhanced text extraction from predicted digital documents :type enhanced_ocr: bool :param additional_extraction_instructions: Additional instructions to guide the LLM in extracting the entities. Only used with LLM algorithms. :type additional_extraction_instructions: str .. py:attribute:: llm_for_ner :type: abacusai.api_class.enums.LLMName :value: None .. py:attribute:: test_split :type: int :value: None .. py:attribute:: test_row_indicator :type: str :value: None .. py:attribute:: active_labels_column :type: str :value: None .. py:attribute:: document_format :type: abacusai.api_class.enums.NLPDocumentFormat :value: None .. py:attribute:: minimum_bounding_box_overlap_ratio :type: float :value: 0.0 .. py:attribute:: save_predicted_pdf :type: bool :value: True .. py:attribute:: enhanced_ocr :type: bool :value: False .. py:attribute:: additional_extraction_instructions :type: str :value: None .. py:method:: __post_init__() .. py:class:: NaturalLanguageSearchTrainingConfig Bases: :py:obj:`TrainingConfig` Training config for the NATURAL_LANGUAGE_SEARCH problem type :param abacus_internal_model: Use an Abacus.AI LLM to answer questions about your data without using any external APIs :type abacus_internal_model: bool :param num_completion_tokens: Default for maximum number of tokens for chat answers. Reducing this will get faster responses which are more succinct :type num_completion_tokens: int :param larger_embeddings: Use a higher dimension embedding model. :type larger_embeddings: bool :param search_chunk_size: Chunk size for indexing the documents. :type search_chunk_size: int :param chunk_overlap_fraction: Overlap in chunks while indexing the documents. :type chunk_overlap_fraction: float :param index_fraction: Fraction of the chunk to use for indexing. :type index_fraction: float .. py:attribute:: abacus_internal_model :type: bool :value: None .. py:attribute:: num_completion_tokens :type: int :value: None .. py:attribute:: larger_embeddings :type: bool :value: None .. py:attribute:: search_chunk_size :type: int :value: None .. py:attribute:: index_fraction :type: float :value: None .. py:attribute:: chunk_overlap_fraction :type: float :value: None .. py:method:: __post_init__()
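
The NER and NATURAL_LANGUAGE_SEARCH configs above follow the same pattern; a minimal sketch for the search config (values are illustrative, not defaults):

.. code-block:: python

    from abacusai.api_class import NaturalLanguageSearchTrainingConfig

    config = NaturalLanguageSearchTrainingConfig(
        abacus_internal_model=True,
        num_completion_tokens=512,
        search_chunk_size=512,
        chunk_overlap_fraction=0.1,
        index_fraction=0.5,
    )
    print(config.to_dict())
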
.. py:class:: ChatLLMTrainingConfig Bases: :py:obj:`TrainingConfig` Training config for the CHAT_LLM problem type :param document_retrievers: List of names or IDs of document retrievers to use as vector stores of information for RAG responses. :type document_retrievers: List[str] :param num_completion_tokens: Default for maximum number of tokens for chat answers. Reducing this will get faster responses which are more succinct. :type num_completion_tokens: int :param temperature: The generative LLM temperature. :type temperature: float :param retrieval_columns: Include the metadata column values in the retrieved search results. :type retrieval_columns: list :param filter_columns: Allow users to filter the document retrievers on these metadata columns. :type filter_columns: list :param include_general_knowledge: Allow the LLM to rely not just on RAG search results, but to fall back on general knowledge. Disabled by default. :type include_general_knowledge: bool :param enable_web_search: Allow the LLM to use Web Search Engines to retrieve information for better results.
:type enable_web_search: bool :param behavior_instructions: Customize the overall behaviour of the model. This controls things like when to execute code (if enabled), write SQL queries, search the web (if enabled), etc. :type behavior_instructions: str :param response_instructions: Customized instructions for how the model should respond including the format, persona and tone of the answers. :type response_instructions: str :param enable_llm_rewrite: If enabled, an LLM will rewrite the RAG queries sent to document retriever. Disabled by default. :type enable_llm_rewrite: bool :param column_filtering_instructions: Instructions for an LLM call to automatically generate filter expressions on document metadata to retrieve relevant documents for the conversation. :type column_filtering_instructions: str :param keyword_requirement_instructions: Instructions for an LLM call to automatically generate keyword requirements to retrieve relevant documents for the conversation. :type keyword_requirement_instructions: str :param query_rewrite_instructions: Special instructions for the LLM which rewrites the RAG query. :type query_rewrite_instructions: str :param max_search_results: Maximum number of search results in the retrieval augmentation step. If we know that the questions are likely to have snippets which are easily matched in the documents, then a lower number will help with accuracy. :type max_search_results: int :param data_feature_group_ids: (List[str]): List of feature group IDs to use to possibly query for the ChatLLM. The created ChatLLM is commonly referred to as DataLLM. :param data_prompt_context: Prompt context for the data feature group IDs. :type data_prompt_context: str :param data_prompt_table_context: Dict of table name and table context pairs to provide table wise context for each structured data table. :type data_prompt_table_context: Dict[str, str] :param data_prompt_column_context: Dict of 'table_name.column_name' and 'column_context' pairs to provide column context for some selected columns in the selected structured data table. This replaces the default auto-generated information about the column data. :type data_prompt_column_context: Dict[str, str] :param hide_sql_and_code: When running data queries, this will hide the generated SQL and Code in the response. :type hide_sql_and_code: bool :param disable_data_summarization: After executing a query, skip summarizing the response and reply back with only the table and the query that was run. :type disable_data_summarization: bool :param data_columns_to_ignore: Columns to ignore while encoding information about structured data tables in context for the LLM. A list of strings of the form "table_name.column_name". :type data_columns_to_ignore: List[str] :param search_score_cutoff: Minimum search score to consider a document as a valid search result. :type search_score_cutoff: float :param include_bm25_retrieval: Combine BM25 search score with vector search using reciprocal rank fusion. :type include_bm25_retrieval: bool :param database_connector_id: Database connector ID to use for connecting external database that gives access to structured data to the LLM. :type database_connector_id: str :param database_connector_tables: List of tables to use from the database connector for the ChatLLM. :type database_connector_tables: List[str] :param enable_code_execution: Enable python code execution in the ChatLLM. This equips the LLM with a python kernel in which all its code is executed.
:type enable_code_execution: bool :param enable_response_caching: Enable caching of LLM responses to speed up response times and improve reproducibility. :type enable_response_caching: bool :param unknown_answer_phrase: Fallback response when the LLM can't find an answer. :type unknown_answer_phrase: str :param enable_tool_bar: Enable the tool bar in Enterprise ChatLLM to provide additional functionalities like tool_use, web_search, image_gen, etc. :type enable_tool_bar: bool :param enable_inline_source_citations: Enable inline citations of the sources in the response. :type enable_inline_source_citations: bool :param response_format: (str): When set to 'JSON', the LLM will generate a JSON formatted string. :param json_response_instructions: Instructions to be followed while generating the json_response if `response_format` is set to "JSON". This can include the schema information if the schema is dynamic and its keys cannot be pre-determined. :type json_response_instructions: str :param json_response_schema: Specifies the JSON schema that the model should adhere to if `response_format` is set to "JSON". This should be a json-formatted string where each field of the expected schema is mapped to a dictionary containing the fields 'type', 'required' and 'description'. For example - '{"sample_field": {"type": "integer", "required": true, "description": "Sample Field"}}' :type json_response_schema: str :param mask_pii: Mask PII in the prompts and uploaded documents before sending it to the LLM. :type mask_pii: bool :param custom_tools: List of custom tool names to be used in the chat. :type custom_tools: List[str] .. py:attribute:: document_retrievers :type: List[str] :value: None .. py:attribute:: num_completion_tokens :type: int :value: None .. py:attribute:: temperature :type: float :value: None .. py:attribute:: retrieval_columns :type: list :value: None .. py:attribute:: filter_columns :type: list :value: None .. py:attribute:: include_general_knowledge :type: bool :value: None .. py:attribute:: enable_web_search :type: bool :value: None .. py:attribute:: behavior_instructions :type: str :value: None .. py:attribute:: response_instructions :type: str :value: None .. py:attribute:: enable_llm_rewrite :type: bool :value: None .. py:attribute:: column_filtering_instructions :type: str :value: None .. py:attribute:: keyword_requirement_instructions :type: str :value: None .. py:attribute:: query_rewrite_instructions :type: str :value: None .. py:attribute:: max_search_results :type: int :value: None .. py:attribute:: data_feature_group_ids :type: List[str] :value: None .. py:attribute:: data_prompt_context :type: str :value: None .. py:attribute:: data_prompt_table_context :type: Dict[str, str] :value: None .. py:attribute:: data_prompt_column_context :type: Dict[str, str] :value: None .. py:attribute:: hide_sql_and_code :type: bool :value: None .. py:attribute:: disable_data_summarization :type: bool :value: None .. py:attribute:: data_columns_to_ignore :type: List[str] :value: None .. py:attribute:: search_score_cutoff :type: float :value: None .. py:attribute:: include_bm25_retrieval :type: bool :value: None .. py:attribute:: database_connector_id :type: str :value: None .. py:attribute:: database_connector_tables :type: List[str] :value: None .. py:attribute:: enable_code_execution :type: bool :value: None .. py:attribute:: metadata_columns :type: list :value: None .. py:attribute:: lookup_rewrite_instructions :type: str :value: None .. 
py:attribute:: enable_response_caching :type: bool :value: None .. py:attribute:: unknown_answer_phrase :type: str :value: None .. py:attribute:: enable_tool_bar :type: bool :value: None .. py:attribute:: enable_inline_source_citations :type: bool :value: None .. py:attribute:: response_format :type: str :value: None .. py:attribute:: json_response_instructions :type: str :value: None .. py:attribute:: json_response_schema :type: str :value: None .. py:attribute:: mask_pii :type: bool :value: None .. py:attribute:: custom_tools :type: List[str] :value: None .. py:method:: __post_init__() .. py:class:: SentenceBoundaryDetectionTrainingConfig Bases: :py:obj:`TrainingConfig` Training config for the SENTENCE_BOUNDARY_DETECTION problem type :param test_split: Percent of dataset to use for test data. We support using a range between 5 ( i.e. 5% ) to 20 ( i.e. 20% ) of your dataset. :type test_split: int :param dropout_rate: Dropout rate for neural network. :type dropout_rate: float :param batch_size: Batch size for neural network. :type batch_size: BatchSize .. py:attribute:: test_split :type: int :value: None .. py:attribute:: dropout_rate :type: float :value: None .. py:attribute:: batch_size :type: abacusai.api_class.enums.BatchSize :value: None .. py:method:: __post_init__() .. py:class:: SentimentDetectionTrainingConfig Bases: :py:obj:`TrainingConfig` Training config for the SENTIMENT_DETECTION problem type :param sentiment_type: Type of sentiment to detect. :type sentiment_type: SentimentType :param test_split: Percent of dataset to use for test data. We support using a range between 5 ( i.e. 5% ) to 20 ( i.e. 20% ) of your dataset. :type test_split: int .. py:attribute:: sentiment_type :type: abacusai.api_class.enums.SentimentType :value: None .. py:attribute:: test_split :type: int :value: None .. py:method:: __post_init__() .. py:class:: DocumentClassificationTrainingConfig Bases: :py:obj:`TrainingConfig` Training config for the DOCUMENT_CLASSIFICATION problem type :param zero_shot_hypotheses: Zero shot hypotheses. Example text: 'This text is about pricing'. :type zero_shot_hypotheses: List[str] :param test_split: Percent of dataset to use for test data. We support using a range between 5 ( i.e. 5% ) to 20 ( i.e. 20% ) of your dataset. :type test_split: int .. py:attribute:: zero_shot_hypotheses :type: List[str] :value: None .. py:attribute:: test_split :type: int :value: None .. py:method:: __post_init__() .. py:class:: DocumentSummarizationTrainingConfig Bases: :py:obj:`TrainingConfig` Training config for the DOCUMENT_SUMMARIZATION problem type :param test_split: Percent of dataset to use for test data. We support using a range between 5 ( i.e. 5% ) to 20 ( i.e. 20% ) of your dataset. :type test_split: int :param dropout_rate: Dropout rate for neural network. :type dropout_rate: float :param batch_size: Batch size for neural network. :type batch_size: BatchSize .. py:attribute:: test_split :type: int :value: None .. py:attribute:: dropout_rate :type: float :value: None .. py:attribute:: batch_size :type: abacusai.api_class.enums.BatchSize :value: None .. py:method:: __post_init__() .. py:class:: DocumentVisualizationTrainingConfig Bases: :py:obj:`TrainingConfig` Training config for the DOCUMENT_VISUALIZATION problem type :param test_split: Percent of dataset to use for test data. We support using a range between 5 ( i.e. 5% ) to 20 ( i.e. 20% ) of your dataset. :type test_split: int :param dropout_rate: Dropout rate for neural network. 
:type dropout_rate: float :param batch_size: Batch size for neural network. :type batch_size: BatchSize .. py:attribute:: test_split :type: int :value: None .. py:attribute:: dropout_rate :type: float :value: None .. py:attribute:: batch_size :type: abacusai.api_class.enums.BatchSize :value: None .. py:method:: __post_init__() .. py:class:: ClusteringTrainingConfig Bases: :py:obj:`TrainingConfig` Training config for the CLUSTERING problem type :param num_clusters_selection: Number of clusters. If None, will be selected automatically. :type num_clusters_selection: int .. py:attribute:: num_clusters_selection :type: int :value: None .. py:method:: __post_init__() .. py:class:: ClusteringTimeseriesTrainingConfig Bases: :py:obj:`TrainingConfig` Training config for the CLUSTERING_TIMESERIES problem type :param num_clusters_selection: Number of clusters. If None, will be selected automatically. :type num_clusters_selection: int :param imputation: Imputation method for missing values. :type imputation: ClusteringImputationMethod .. py:attribute:: num_clusters_selection :type: int :value: None .. py:attribute:: imputation :type: abacusai.api_class.enums.ClusteringImputationMethod :value: None .. py:method:: __post_init__() .. py:class:: EventAnomalyTrainingConfig Bases: :py:obj:`TrainingConfig` Training config for the EVENT_ANOMALY problem type :param anomaly_fraction: The fraction of the dataset to classify as anomalous, between 0 and 0.5 :type anomaly_fraction: float .. py:attribute:: anomaly_fraction :type: float :value: None .. py:method:: __post_init__() .. py:class:: TimeseriesAnomalyTrainingConfig Bases: :py:obj:`TrainingConfig` Training config for the TS_ANOMALY problem type :param type_of_split: Type of data splitting into train/test. :type type_of_split: TimeseriesAnomalyDataSplitType :param test_start: Limit training data to dates before the given test start. :type test_start: str :param test_split: Percent of dataset to use for test data. We support using a range between 5 ( i.e. 5% ) to 20 ( i.e. 20% ) of your dataset. :type test_split: int :param fill_missing_values: strategies to fill missing values and missing timestamps :type fill_missing_values: List[List[dict]] :param handle_zeros_as_missing_values: If True, handle zero values in numeric columns as missing data :type handle_zeros_as_missing_values: bool :param timeseries_frequency: set this to control frequency of filling missing values :type timeseries_frequency: str :param min_samples_in_normal_region: Adjust this to fine-tune the number of anomalies to be identified. :type min_samples_in_normal_region: int :param anomaly_type: select what kind of peaks to detect as anomalies :type anomaly_type: TimeseriesAnomalyTypeOfAnomaly :param hyperparameter_calculation_with_heuristics: Enable heuristic calculation to get hyperparameters for the model :type hyperparameter_calculation_with_heuristics: TimeseriesAnomalyUseHeuristic :param threshold_score: Threshold score for anomaly detection :type threshold_score: float :param additional_anomaly_ids: List of categorical columns that can act as multi-identifier :type additional_anomaly_ids: List[str] .. py:attribute:: type_of_split :type: abacusai.api_class.enums.TimeseriesAnomalyDataSplitType :value: None .. py:attribute:: test_start :type: str :value: None .. py:attribute:: test_split :type: int :value: None .. py:attribute:: fill_missing_values :type: List[List[dict]] :value: None .. py:attribute:: handle_zeros_as_missing_values :type: bool :value: None .. 
py:attribute:: timeseries_frequency :type: str :value: None .. py:attribute:: min_samples_in_normal_region :type: int :value: None .. py:attribute:: anomaly_type :type: abacusai.api_class.enums.TimeseriesAnomalyTypeOfAnomaly :value: None .. py:attribute:: hyperparameter_calculation_with_heuristics :type: abacusai.api_class.enums.TimeseriesAnomalyUseHeuristic :value: None .. py:attribute:: threshold_score :type: float :value: None .. py:attribute:: additional_anomaly_ids :type: List[str] :value: None .. py:method:: __post_init__() .. py:class:: CumulativeForecastingTrainingConfig Bases: :py:obj:`TrainingConfig` Training config for the CUMULATIVE_FORECASTING problem type :param test_split: Percent of dataset to use for test data. We support using a range between 5 ( i.e. 5% ) to 20 ( i.e. 20% ) of your dataset. :type test_split: int :param historical_frequency: Forecast frequency :type historical_frequency: str :param cumulative_prediction_lengths: List of Cumulative Prediction Frequencies. Each prediction length must be between 1 and 365. :type cumulative_prediction_lengths: List[int] :param skip_input_transform: Avoid doing numeric scaling transformations on the input. :type skip_input_transform: bool :param skip_target_transform: Avoid doing numeric scaling transformations on the target. :type skip_target_transform: bool :param predict_residuals: Predict residuals instead of totals at each prediction step. :type predict_residuals: bool .. py:attribute:: test_split :type: int :value: None .. py:attribute:: historical_frequency :type: str :value: None .. py:attribute:: cumulative_prediction_lengths :type: List[int] :value: None .. py:attribute:: skip_input_transform :type: bool :value: None .. py:attribute:: skip_target_transform :type: bool :value: None .. py:attribute:: predict_residuals :type: bool :value: None .. py:method:: __post_init__() .. py:class:: ThemeAnalysisTrainingConfig Bases: :py:obj:`TrainingConfig` Training config for the THEME ANALYSIS problem type .. py:method:: __post_init__() .. py:class:: AIAgentTrainingConfig Bases: :py:obj:`TrainingConfig` Training config for the AI_AGENT problem type :param description: Description of the agent function. :type description: str :param agent_interface: The interface that the agent will be deployed with. :type agent_interface: AgentInterface :param agent_connectors: (List[enums.ApplicationConnectorType]): The connectors needed for the agent to function. .. py:attribute:: description :type: str :value: None .. py:attribute:: agent_interface :type: abacusai.api_class.enums.AgentInterface :value: None .. py:attribute:: agent_connectors :type: List[abacusai.api_class.enums.ApplicationConnectorType] :value: None .. py:attribute:: enable_binary_input :type: bool :value: None .. py:attribute:: agent_input_schema :type: dict :value: None .. py:attribute:: agent_output_schema :type: dict :value: None .. py:method:: __post_init__() .. py:class:: CustomTrainedModelTrainingConfig Bases: :py:obj:`TrainingConfig` Training config for the CUSTOM_TRAINED_MODEL problem type :param max_catalog_size: Maximum expected catalog size. :type max_catalog_size: int :param max_dimension: Maximum expected dimension of the catalog. :type max_dimension: int :param index_output_path: Fully qualified cloud location (GCS, S3, etc) to export snapshots of the embedding to. :type index_output_path: str :param docker_image_uri: Docker image URI. :type docker_image_uri: str :param service_port: Service port. 
:type service_port: int :param streaming_embeddings: Flag to enable streaming embeddings. :type streaming_embeddings: bool .. py:attribute:: max_catalog_size :type: int :value: None .. py:attribute:: max_dimension :type: int :value: None .. py:attribute:: index_output_path :type: str :value: None .. py:attribute:: docker_image_uri :type: str :value: None .. py:attribute:: service_port :type: int :value: None .. py:attribute:: streaming_embeddings :type: bool :value: None .. py:method:: __post_init__() .. py:class:: CustomAlgorithmTrainingConfig Bases: :py:obj:`TrainingConfig` Training config for the CUSTOM_ALGORITHM problem type :param timeout_minutes: Timeout for the model training in minutes. :type timeout_minutes: int .. py:attribute:: timeout_minutes :type: int :value: None .. py:method:: __post_init__() .. py:class:: OptimizationTrainingConfig Bases: :py:obj:`TrainingConfig` Training config for the OPTIMIZATION problem type :param solve_time_limit: The maximum time in seconds to spend solving the problem. Accepts values between 0 and 86400. :type solve_time_limit: float :param optimality_gap_limit: The stopping optimality gap limit. Optimality gap is fractional difference between the best known solution and the best possible solution. Accepts values between 0 and 1. :type optimality_gap_limit: float :param include_all_partitions: Include all partitions in the model training. Default is False. :type include_all_partitions: bool :param include_specific_partitions: Include specific partitions in partitioned model training. Default is empty list. :type include_specific_partitions: List[str] .. py:attribute:: solve_time_limit :type: float :value: None .. py:attribute:: optimality_gap_limit :type: float :value: None .. py:attribute:: include_all_partitions :type: bool :value: None .. py:attribute:: include_specific_partitions :type: List[str] :value: None .. py:method:: __post_init__() .. py:class:: _TrainingConfigFactory Bases: :py:obj:`abacusai.api_class.abstract._ApiClassFactory` Helper class that provides a standard way to create an ABC using inheritance. .. py:attribute:: config_abstract_class .. py:attribute:: config_class_key :value: 'problem_type' .. py:attribute:: config_class_map .. py:class:: DeployableAlgorithm Bases: :py:obj:`abacusai.api_class.abstract.ApiClass` Algorithm that can be deployed to a model. :param algorithm: ID of the algorithm. :type algorithm: str :param name: Name of the algorithm. :type name: str :param only_offline_deployable: Whether the algorithm can only be deployed offline. :type only_offline_deployable: bool :param trained_model_types: List of trained model types. :type trained_model_types: List[dict] .. py:attribute:: algorithm :type: str :value: None .. py:attribute:: name :type: str :value: None .. py:attribute:: only_offline_deployable :type: bool :value: None .. py:attribute:: trained_model_types :type: List[dict] :value: None .. py:class:: ApiClass Bases: :py:obj:`abc.ABC` Helper class that provides a standard way to create an ABC using inheritance. .. py:attribute:: _upper_snake_case_keys :type: bool :value: False .. py:attribute:: _support_kwargs :type: bool :value: False .. py:method:: __post_init__() .. py:method:: _get_builder() :classmethod: .. py:method:: __str__() .. py:method:: _repr_html_() .. py:method:: __getitem__(item) .. py:method:: __setitem__(item, value) .. py:method:: _unset_item(item) .. py:method:: get(item, default = None) .. py:method:: pop(item, default = NotImplemented) .. 
py:method:: to_dict() Standardizes converting an ApiClass to dictionary. Keys of response dictionary are converted to camel case. This also validates the fields ( type, value, etc ) received in the dictionary. .. py:method:: from_dict(input_dict) :classmethod: .. py:class:: StdDevThresholdType Bases: :py:obj:`ApiEnum` Generic enumeration. Derive from this class to define new enumerations. .. py:attribute:: ABSOLUTE :value: 'ABSOLUTE' .. py:attribute:: PERCENTILE :value: 'PERCENTILE' .. py:attribute:: STDDEV :value: 'STDDEV' .. py:class:: TimeWindowConfig Bases: :py:obj:`abacusai.api_class.abstract.ApiClass` Time Window Configuration :param window_duration: The duration of the window. :type window_duration: int :param window_from_start: Whether the window should be from the start of the time series. :type window_from_start: bool .. py:attribute:: window_duration :type: int :value: None .. py:attribute:: window_from_start :type: bool :value: None .. py:method:: to_dict() Standardizes converting an ApiClass to dictionary. Keys of response dictionary are converted to camel case. This also validates the fields ( type, value, etc ) received in the dictionary. .. py:class:: ForecastingMonitorConfig Bases: :py:obj:`abacusai.api_class.abstract.ApiClass` Forecasting Monitor Configuration :param id_column: The name of the column that contains the unique identifier for the time series. :type id_column: str :param timestamp_column: The name of the column that contains the timestamp for the time series. :type timestamp_column: str :param target_column: The name of the column that contains the target value for the time series. :type target_column: str :param start_time: The start time of the time series data. :type start_time: str :param end_time: The end time of the time series data. :type end_time: str :param window_config: The windowing configuration for the time series data. :type window_config: TimeWindowConfig .. py:attribute:: id_column :type: str :value: None .. py:attribute:: timestamp_column :type: str :value: None .. py:attribute:: target_column :type: str :value: None .. py:attribute:: start_time :type: str :value: None .. py:attribute:: end_time :type: str :value: None .. py:attribute:: window_config :type: TimeWindowConfig :value: None .. py:method:: to_dict() Standardizes converting an ApiClass to dictionary. Keys of response dictionary are converted to camel case. This also validates the fields ( type, value, etc ) received in the dictionary. .. py:class:: StdDevThreshold Bases: :py:obj:`abacusai.api_class.abstract.ApiClass` Std Dev Threshold types :param threshold_type: Type of threshold to apply to the item attributes. :type threshold_type: StdDevThresholdType :param value: Value to use for the threshold. :type value: float .. py:attribute:: threshold_type :type: abacusai.api_class.enums.StdDevThresholdType :value: None .. py:attribute:: value :type: float :value: None .. py:method:: to_dict() Standardizes converting an ApiClass to dictionary. Keys of response dictionary are converted to camel case. This also validates the fields ( type, value, etc ) received in the dictionary. .. py:class:: ItemAttributesStdDevThreshold Bases: :py:obj:`abacusai.api_class.abstract.ApiClass` Item Attributes Std Dev Threshold for Monitor Alerts :param lower_bound: Lower bound for the item attributes. :type lower_bound: StdDevThreshold :param upper_bound: Upper bound for the item attributes. :type upper_bound: StdDevThreshold .. py:attribute:: lower_bound :type: StdDevThreshold :value: None .. 
py:attribute:: upper_bound :type: StdDevThreshold :value: None .. py:method:: to_dict() Standardizes converting an ApiClass to dictionary. Keys of response dictionary are converted to camel case. This also validates the fields ( type, value, etc ) received in the dictionary. .. py:class:: RestrictFeatureMappings Bases: :py:obj:`abacusai.api_class.abstract.ApiClass` Restrict Feature Mappings for Monitor Filtering :param feature_name: The name of the feature to restrict the monitor to. :type feature_name: str :param restricted_feature_values: The values of the feature to restrict the monitor to if feature is a categorical. :type restricted_feature_values: list :param start_time: The start time of the timestamp feature to filter from :type start_time: str :param end_time: The end time of the timestamp feature to filter until :type end_time: str :param min_value: Value to filter the numerical feature above :type min_value: float :param max_value: Filtering the numerical feature to below this value :type max_value: float .. py:attribute:: feature_name :type: str :value: None .. py:attribute:: restricted_feature_values :type: list :value: [] .. py:attribute:: start_time :type: str :value: None .. py:attribute:: end_time :type: str :value: None .. py:attribute:: min_value :type: float :value: None .. py:attribute:: max_value :type: float :value: None .. py:method:: to_dict() Standardizes converting an ApiClass to dictionary. Keys of response dictionary are converted to camel case. This also validates the fields ( type, value, etc ) received in the dictionary. .. py:class:: MonitorFilteringConfig Bases: :py:obj:`abacusai.api_class.abstract.ApiClass` Monitor Filtering Configuration :param start_time: The start time of the prediction time col :type start_time: str :param end_time: The end time of the prediction time col :type end_time: str :param restrict_feature_mappings: The feature mapping to restrict the monitor to. :type restrict_feature_mappings: RestrictFeatureMappings :param target_class: The target class to restrict the monitor to. :type target_class: str :param train_target_feature: Set the target feature for the training data. :type train_target_feature: str :param prediction_target_feature: Set the target feature for the prediction data. :type prediction_target_feature: str .. py:attribute:: start_time :type: str :value: None .. py:attribute:: end_time :type: str :value: None .. py:attribute:: restrict_feature_mappings :type: List[RestrictFeatureMappings] :value: None .. py:attribute:: target_class :type: str :value: None .. py:attribute:: train_target_feature :type: str :value: None .. py:attribute:: prediction_target_feature :type: str :value: None .. py:method:: to_dict() Standardizes converting an ApiClass to dictionary. Keys of response dictionary are converted to camel case. This also validates the fields ( type, value, etc ) received in the dictionary. .. py:class:: ApiClass Bases: :py:obj:`abc.ABC` Helper class that provides a standard way to create an ABC using inheritance. .. py:attribute:: _upper_snake_case_keys :type: bool :value: False .. py:attribute:: _support_kwargs :type: bool :value: False .. py:method:: __post_init__() .. py:method:: _get_builder() :classmethod: .. py:method:: __str__() .. py:method:: _repr_html_() .. py:method:: __getitem__(item) .. py:method:: __setitem__(item, value) .. py:method:: _unset_item(item) .. py:method:: get(item, default = None) .. py:method:: pop(item, default = NotImplemented) .. 
py:method:: to_dict() Standardizes converting an ApiClass to dictionary. Keys of response dictionary are converted to camel case. This also validates the fields ( type, value, etc ) received in the dictionary. .. py:method:: from_dict(input_dict) :classmethod: .. py:class:: _ApiClassFactory Bases: :py:obj:`abc.ABC` Helper class that provides a standard way to create an ABC using inheritance. .. py:attribute:: config_abstract_class :value: None .. py:attribute:: config_class_key :value: None .. py:attribute:: config_class_map .. py:method:: from_dict(config) :classmethod: .. py:class:: AlertConditionConfig Bases: :py:obj:`abacusai.api_class.abstract.ApiClass` An abstract class for alert condition configs .. py:attribute:: alert_type :type: abacusai.api_class.enums.MonitorAlertType :value: None .. py:method:: _get_builder() :classmethod: .. py:class:: AccuracyBelowThresholdConditionConfig Bases: :py:obj:`AlertConditionConfig` Accuracy Below Threshold Condition Config for Monitor Alerts :param threshold: Threshold for when to consider a column to be in violation. The alert will only fire when the drift value is strictly greater than the threshold. :type threshold: float .. py:attribute:: threshold :type: float :value: None .. py:method:: __post_init__() .. py:class:: FeatureDriftConditionConfig Bases: :py:obj:`AlertConditionConfig` Feature Drift Condition Config for Monitor Alerts :param feature_drift_type: Feature drift type to apply the threshold on to determine whether a column has drifted significantly enough to be a violation. :type feature_drift_type: FeatureDriftType :param threshold: Threshold for when to consider a column to be in violation. The alert will only fire when the drift value is strictly greater than the threshold. :type threshold: float :param minimum_violations: Number of columns that must exceed the specified threshold to trigger an alert. :type minimum_violations: int :param feature_names: List of feature names to monitor for this alert. :type feature_names: List[str] .. py:attribute:: feature_drift_type :type: abacusai.api_class.enums.FeatureDriftType :value: None .. py:attribute:: threshold :type: float :value: None .. py:attribute:: minimum_violations :type: int :value: None .. py:attribute:: feature_names :type: List[str] :value: None .. py:method:: __post_init__() .. py:class:: TargetDriftConditionConfig Bases: :py:obj:`AlertConditionConfig` Target Drift Condition Config for Monitor Alerts :param feature_drift_type: Target drift type to apply the threshold on to determine whether a column has drifted significantly enough to be a violation. :type feature_drift_type: FeatureDriftType :param threshold: Threshold for when to consider the target column to be in violation. The alert will only fire when the drift value is strictly greater than the threshold. :type threshold: float .. py:attribute:: feature_drift_type :type: abacusai.api_class.enums.FeatureDriftType :value: None .. py:attribute:: threshold :type: float :value: None .. py:method:: __post_init__() .. py:class:: HistoryLengthDriftConditionConfig Bases: :py:obj:`AlertConditionConfig` History Length Drift Condition Config for Monitor Alerts :param feature_drift_type: History length drift type to apply the threshold on to determine whether the history length has drifted significantly enough to be a violation. :type feature_drift_type: FeatureDriftType :param threshold: Threshold for when to consider the history length to be in violation. 
The alert will only fire when the drift value is strictly greater than the threshold. :type threshold: float .. py:attribute:: feature_drift_type :type: abacusai.api_class.enums.FeatureDriftType :value: None .. py:attribute:: threshold :type: float :value: None .. py:method:: __post_init__()
.. py:class:: DataIntegrityViolationConditionConfig Bases: :py:obj:`AlertConditionConfig` Data Integrity Violation Condition Config for Monitor Alerts :param data_integrity_type: This option selects the data integrity violations to monitor for this alert. :type data_integrity_type: DataIntegrityViolationType :param minimum_violations: Number of columns that must exceed the specified threshold to trigger an alert. :type minimum_violations: int .. py:attribute:: data_integrity_type :type: abacusai.api_class.enums.DataIntegrityViolationType :value: None .. py:attribute:: minimum_violations :type: int :value: None .. py:method:: __post_init__()
.. py:class:: BiasViolationConditionConfig Bases: :py:obj:`AlertConditionConfig` Bias Violation Condition Config for Monitor Alerts :param bias_type: This option selects the bias metric to monitor for this alert. :type bias_type: BiasType :param threshold: Threshold for when to consider a column to be in violation. The alert will only fire when the drift value is strictly greater than the threshold. :type threshold: float :param minimum_violations: Number of columns that must exceed the specified threshold to trigger an alert. :type minimum_violations: int .. py:attribute:: bias_type :type: abacusai.api_class.enums.BiasType :value: None .. py:attribute:: threshold :type: float :value: None .. py:attribute:: minimum_violations :type: int :value: None .. py:method:: __post_init__()
.. py:class:: PredictionCountConditionConfig Bases: :py:obj:`AlertConditionConfig` Deployment Prediction Condition Config for Deployment Alerts. By default, we monitor whether the number of predictions made over a time window has dropped significantly. :param threshold: Threshold for when to consider it a violation. Negative means alert on reduction, positive means alert on increase. :type threshold: float :param aggregation_window: Time window to aggregate the predictions over, e.g. 1h, 10m. Only h (hour), m (minute) and s (second) are supported. :type aggregation_window: str :param aggregation_type: Aggregation type to use for the aggregation window, e.g. sum, avg. :type aggregation_type: str .. py:attribute:: threshold :type: float :value: None .. py:attribute:: aggregation_window :type: str :value: None .. py:attribute:: aggregation_type :type: str :value: None .. py:method:: __post_init__()
.. py:class:: _AlertConditionConfigFactory Bases: :py:obj:`abacusai.api_class.abstract._ApiClassFactory` Helper class that provides a standard way to create an ABC using inheritance. .. py:attribute:: config_abstract_class .. py:attribute:: config_class_key :value: 'alert_type' .. py:attribute:: config_class_key_value_camel_case :value: True .. py:attribute:: config_class_map
.. py:class:: AlertActionConfig Bases: :py:obj:`abacusai.api_class.abstract.ApiClass` An abstract class for alert action configs .. py:attribute:: action_type :type: abacusai.api_class.enums.AlertActionType :value: None .. py:method:: _get_builder() :classmethod:
.. py:class:: EmailActionConfig Bases: :py:obj:`AlertActionConfig` Email Action Config for Monitor Alerts :param email_recipients: List of email addresses to send the alert to. :type email_recipients: List[str] :param email_body: Body of the email to send. :type email_body: str .. py:attribute:: email_recipients :type: List[str] :value: None .. py:attribute:: email_body :type: str :value: None .. py:method:: __post_init__()
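As a hedged illustration of how an alert condition config pairs with an alert action config, the sketch below builds a feature drift condition and an email action; the threshold, feature names, and recipient address are invented for the example, and the API call that registers the alert is deliberately not shown.

.. code-block:: python

   # Illustrative sketch: a feature drift condition paired with an email action.
   # Import path assumed from the attribute types documented above.
   from abacusai.api_class import FeatureDriftConditionConfig, EmailActionConfig

   condition = FeatureDriftConditionConfig(
       threshold=0.3,                      # fire when the drift value exceeds 0.3
       minimum_violations=2,               # require at least two drifting columns
       feature_names=["price", "volume"],  # hypothetical columns to monitor
   )

   action = EmailActionConfig(
       email_recipients=["mlops@example.com"],
       email_body="Feature drift detected on the monitored deployment.",
   )

   # Both configs serialize to camelCase dicts via ApiClass.to_dict();
   # they are typically supplied together when creating a monitor alert.
   print(condition.to_dict(), action.to_dict())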
.. py:class:: _AlertActionConfigFactory Bases: :py:obj:`abacusai.api_class.abstract._ApiClassFactory` Helper class that provides a standard way to create an ABC using inheritance. .. py:attribute:: config_abstract_class .. py:attribute:: config_class_key :value: 'action_type' .. py:attribute:: config_class_map
.. py:class:: MonitorThresholdConfig Bases: :py:obj:`abacusai.api_class.abstract.ApiClass` Monitor Threshold Config for Monitor Alerts :param drift_type: Feature drift type to apply the threshold on to determine whether a column has drifted significantly enough to be a violation. :type drift_type: FeatureDriftType :param threshold_config: Thresholds for when to consider a column to be in violation. The alert will only fire when the drift value is strictly greater than the threshold. :type threshold_config: ThresholdConfigs .. py:attribute:: drift_type :type: abacusai.api_class.enums.FeatureDriftType :value: None .. py:attribute:: at_risk_threshold :type: float :value: None .. py:attribute:: severely_drifting_threshold :type: float :value: None .. py:method:: to_dict() Standardizes converting an ApiClass to dictionary. Keys of response dictionary are converted to camel case. This also validates the fields ( type, value, etc ) received in the dictionary.
.. py:class:: ApiClass Bases: :py:obj:`abc.ABC` Helper class that provides a standard way to create an ABC using inheritance. .. py:attribute:: _upper_snake_case_keys :type: bool :value: False .. py:attribute:: _support_kwargs :type: bool :value: False .. py:method:: __post_init__() .. py:method:: _get_builder() :classmethod: .. py:method:: __str__() .. py:method:: _repr_html_() .. py:method:: __getitem__(item) .. py:method:: __setitem__(item, value) .. py:method:: _unset_item(item) .. py:method:: get(item, default = None) .. py:method:: pop(item, default = NotImplemented) .. py:method:: to_dict() Standardizes converting an ApiClass to dictionary. Keys of response dictionary are converted to camel case. This also validates the fields ( type, value, etc ) received in the dictionary. .. py:method:: from_dict(input_dict) :classmethod:
.. py:class:: _ApiClassFactory Bases: :py:obj:`abc.ABC` Helper class that provides a standard way to create an ABC using inheritance. .. py:attribute:: config_abstract_class :value: None .. py:attribute:: config_class_key :value: None .. py:attribute:: config_class_map .. py:method:: from_dict(config) :classmethod:
.. py:class:: FeatureMappingConfig Bases: :py:obj:`abacusai.api_class.abstract.ApiClass` Feature mapping configuration for a feature group type. :param feature_name: The name of the feature in the feature group. :type feature_name: str :param feature_mapping: The desired feature mapping for the feature. :type feature_mapping: str :param nested_feature_name: The name of the nested feature in the feature group. :type nested_feature_name: str .. py:attribute:: feature_name :type: str .. py:attribute:: feature_mapping :type: str :value: None .. py:attribute:: nested_feature_name :type: str :value: None
.. py:class:: ProjectFeatureGroupTypeMappingsConfig Bases: :py:obj:`abacusai.api_class.abstract.ApiClass` Project feature group type mappings. :param feature_group_id: The unique identifier for the feature group. :type feature_group_id: str :param feature_group_type: The feature group type.
:type feature_group_type: str :param feature_mappings: The feature mappings for the feature group. :type feature_mappings: List[FeatureMappingConfig] .. py:attribute:: feature_group_id :type: str .. py:attribute:: feature_group_type :type: str :value: None .. py:attribute:: feature_mappings :type: List[FeatureMappingConfig] .. py:method:: from_dict(input_dict) :classmethod: .. py:class:: ConstraintConfig Bases: :py:obj:`abacusai.api_class.abstract.ApiClass` Constraint configuration. :param constant: The constant value for the constraint. :type constant: float :param operator: The operator for the constraint. Could be 'EQ', 'LE', 'GE' :type operator: str :param enforcement: The enforcement for the constraint. Could be 'HARD' or 'SOFT' or 'SKIP'. Default is 'HARD' :type enforcement: str :param code: The code for the constraint. :type code: str :param penalty: The penalty for violating the constraint. :type penalty: float .. py:attribute:: constant :type: float .. py:attribute:: operator :type: str .. py:attribute:: enforcement :type: Optional[str] :value: None .. py:attribute:: code :type: Optional[str] :value: None .. py:attribute:: penalty :type: Optional[float] :value: None .. py:class:: ProjectFeatureGroupConfig Bases: :py:obj:`abacusai.api_class.abstract.ApiClass` An abstract class for project feature group configuration. .. py:attribute:: type :type: abacusai.api_class.enums.ProjectConfigType :value: None .. py:method:: _get_builder() :classmethod: .. py:class:: ConstraintProjectFeatureGroupConfig Bases: :py:obj:`ProjectFeatureGroupConfig` Constraint project feature group configuration. :param constraints: The constraint for the feature group. Should be a list of one ConstraintConfig. :type constraints: List[ConstraintConfig] .. py:attribute:: constraints :type: List[ConstraintConfig] .. py:method:: __post_init__() .. py:class:: ReviewModeProjectFeatureGroupConfig Bases: :py:obj:`ProjectFeatureGroupConfig` Review mode project feature group configuration. :param is_review_mode: The review mode for the feature group. :type is_review_mode: bool .. py:attribute:: is_review_mode :type: bool .. py:method:: __post_init__() .. py:class:: _ProjectFeatureGroupConfigFactory Bases: :py:obj:`abacusai.api_class.abstract._ApiClassFactory` Helper class that provides a standard way to create an ABC using inheritance. .. py:attribute:: config_abstract_class .. py:attribute:: config_class_key :value: 'type' .. py:attribute:: config_class_map .. py:class:: ApiClass Bases: :py:obj:`abc.ABC` Helper class that provides a standard way to create an ABC using inheritance. .. py:attribute:: _upper_snake_case_keys :type: bool :value: False .. py:attribute:: _support_kwargs :type: bool :value: False .. py:method:: __post_init__() .. py:method:: _get_builder() :classmethod: .. py:method:: __str__() .. py:method:: _repr_html_() .. py:method:: __getitem__(item) .. py:method:: __setitem__(item, value) .. py:method:: _unset_item(item) .. py:method:: get(item, default = None) .. py:method:: pop(item, default = NotImplemented) .. py:method:: to_dict() Standardizes converting an ApiClass to dictionary. Keys of response dictionary are converted to camel case. This also validates the fields ( type, value, etc ) received in the dictionary. .. py:method:: from_dict(input_dict) :classmethod: .. 
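Tying together the project feature group configuration classes documented above, here is a short hedged sketch; the constant, operator, and enforcement values are illustrative only, and the import path is assumed from the documented attribute types.

.. code-block:: python

   # Illustrative sketch: a constraint-based project feature group config.
   from abacusai.api_class import ConstraintConfig, ConstraintProjectFeatureGroupConfig

   constraint = ConstraintConfig(
       constant=100.0,      # right-hand side value of the constraint
       operator="LE",       # one of 'EQ', 'LE', 'GE'
       enforcement="HARD",  # 'HARD', 'SOFT', or 'SKIP' (default is 'HARD')
   )

   # The constraints field should be a list containing one ConstraintConfig.
   fg_config = ConstraintProjectFeatureGroupConfig(constraints=[constraint])

   # Like other ApiClass objects, the config serializes to a camelCase dict.
   print(fg_config.to_dict())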
py:class:: PythonFunctionArgument Bases: :py:obj:`abacusai.api_class.abstract.ApiClass` A config class for python function arguments :param variable_type: The type of the python function argument :type variable_type: PythonFunctionArgumentType :param name: The name of the python function variable :type name: str :param is_required: Whether the argument is required :type is_required: bool :param value: The value of the argument :type value: Any :param pipeline_variable: The name of the pipeline variable to use as the value :type pipeline_variable: str :param description: The description of the argument :type description: str :param item_type: Type of items when variable_type is LIST :type item_type: str .. py:attribute:: variable_type :type: abacusai.api_class.enums.PythonFunctionArgumentType :value: None .. py:attribute:: name :type: str :value: None .. py:attribute:: is_required :type: bool :value: True .. py:attribute:: value :type: Any :value: None .. py:attribute:: pipeline_variable :type: str :value: None .. py:attribute:: description :type: str :value: None .. py:attribute:: item_type :type: str :value: None .. py:class:: OutputVariableMapping Bases: :py:obj:`abacusai.api_class.abstract.ApiClass` A config class for python function arguments :param variable_type: The type of the python function output argument :type variable_type: PythonFunctionOutputArgumentType :param name: The name of the python function variable :type name: str .. py:attribute:: variable_type :type: abacusai.api_class.enums.PythonFunctionOutputArgumentType :value: None .. py:attribute:: name :type: str :value: None .. py:class:: ApiClass Bases: :py:obj:`abc.ABC` Helper class that provides a standard way to create an ABC using inheritance. .. py:attribute:: _upper_snake_case_keys :type: bool :value: False .. py:attribute:: _support_kwargs :type: bool :value: False .. py:method:: __post_init__() .. py:method:: _get_builder() :classmethod: .. py:method:: __str__() .. py:method:: _repr_html_() .. py:method:: __getitem__(item) .. py:method:: __setitem__(item, value) .. py:method:: _unset_item(item) .. py:method:: get(item, default = None) .. py:method:: pop(item, default = NotImplemented) .. py:method:: to_dict() Standardizes converting an ApiClass to dictionary. Keys of response dictionary are converted to camel case. This also validates the fields ( type, value, etc ) received in the dictionary. .. py:method:: from_dict(input_dict) :classmethod: .. py:class:: _ApiClassFactory Bases: :py:obj:`abc.ABC` Helper class that provides a standard way to create an ABC using inheritance. .. py:attribute:: config_abstract_class :value: None .. py:attribute:: config_class_key :value: None .. py:attribute:: config_class_map .. py:method:: from_dict(config) :classmethod: .. py:class:: FeatureGroupExportConfig Bases: :py:obj:`abacusai.api_class.abstract.ApiClass` An abstract class for feature group exports. .. py:attribute:: connector_type :type: abacusai.api_class.enums.ConnectorType :value: None .. py:method:: _get_builder() :classmethod: .. py:class:: FileConnectorExportConfig Bases: :py:obj:`FeatureGroupExportConfig` File connector export config for feature groups :param location: The location to export the feature group to :type location: str :param export_file_format: The file format to export the feature group to :type export_file_format: str .. py:attribute:: location :type: str :value: None .. py:attribute:: export_file_format :type: str :value: None .. py:method:: __post_init__() .. 
py:method:: to_dict() Standardizes converting an ApiClass to dictionary. Keys of response dictionary are converted to camel case. This also validates the fields ( type, value, etc ) received in the dictionary. .. py:class:: DatabaseConnectorExportConfig Bases: :py:obj:`FeatureGroupExportConfig` Database connector export config for feature groups :param database_connector_id: The ID of the database connector to export the feature group to :type database_connector_id: str :param mode: The mode to export the feature group in :type mode: str :param object_name: The name of the object to export the feature group to :type object_name: str :param id_column: The name of the ID column :type id_column: str :param additional_id_columns: Additional ID columns :type additional_id_columns: List[str] :param data_columns: The data columns to export the feature group to :type data_columns: Dict[str, str] .. py:attribute:: database_connector_id :type: str :value: None .. py:attribute:: mode :type: str :value: None .. py:attribute:: object_name :type: str :value: None .. py:attribute:: id_column :type: str :value: None .. py:attribute:: additional_id_columns :type: List[str] :value: None .. py:attribute:: data_columns :type: Dict[str, str] :value: None .. py:method:: __post_init__() .. py:method:: to_dict() Standardizes converting an ApiClass to dictionary. Keys of response dictionary are converted to camel case. This also validates the fields ( type, value, etc ) received in the dictionary. .. py:class:: _FeatureGroupExportConfigFactory Bases: :py:obj:`abacusai.api_class.abstract._ApiClassFactory` Helper class that provides a standard way to create an ABC using inheritance. .. py:attribute:: config_abstract_class .. py:attribute:: config_class_key :value: 'connector_type' .. py:attribute:: config_class_map .. py:class:: ApiClass Bases: :py:obj:`abc.ABC` Helper class that provides a standard way to create an ABC using inheritance. .. py:attribute:: _upper_snake_case_keys :type: bool :value: False .. py:attribute:: _support_kwargs :type: bool :value: False .. py:method:: __post_init__() .. py:method:: _get_builder() :classmethod: .. py:method:: __str__() .. py:method:: _repr_html_() .. py:method:: __getitem__(item) .. py:method:: __setitem__(item, value) .. py:method:: _unset_item(item) .. py:method:: get(item, default = None) .. py:method:: pop(item, default = NotImplemented) .. py:method:: to_dict() Standardizes converting an ApiClass to dictionary. Keys of response dictionary are converted to camel case. This also validates the fields ( type, value, etc ) received in the dictionary. .. py:method:: from_dict(input_dict) :classmethod: .. py:class:: ResponseSection Bases: :py:obj:`abacusai.api_class.abstract.ApiClass` A response section that an agent can return to render specific UI elements. :param type: The type of the response. :type type: ResponseSectionType :param id: The section key of the segment. :type id: str .. py:attribute:: type :type: abacusai.api_class.enums.ResponseSectionType .. py:attribute:: id :type: str .. py:method:: __post_init__() .. py:method:: to_dict() Standardizes converting an ApiClass to dictionary. Keys of response dictionary are converted to camel case. This also validates the fields ( type, value, etc ) received in the dictionary. .. py:data:: Segment .. py:class:: AgentFlowButtonResponseSection(label, agent_workflow_node_name, section_key = None) Bases: :py:obj:`ResponseSection` A response section that an AI Agent can return to render a button. 
:param label: The label of the button. :type label: str :param agent_workflow_node_name: The workflow start node to be executed when the button is clicked. :type agent_workflow_node_name: str .. py:attribute:: label :type: str .. py:attribute:: agent_workflow_node_name :type: str .. py:class:: ImageUrlResponseSection(url, height, width, section_key = None) Bases: :py:obj:`ResponseSection` A response section that an agent can return to render an image. :param url: The url of the image to be displayed. :type url: str :param height: The height of the image. :type height: int :param width: The width of the image. :type width: int .. py:attribute:: url :type: str .. py:attribute:: height :type: int .. py:attribute:: width :type: int .. py:class:: TextResponseSection(text, section_key = None) Bases: :py:obj:`ResponseSection` A response section that an agent can return to render text. :param segment: The text to be displayed. :type segment: str .. py:attribute:: segment :type: str .. py:class:: RuntimeSchemaResponseSection(json_schema, ui_schema = None, schema_prop = None) Bases: :py:obj:`ResponseSection` A segment that an agent can return to render json and ui schema in react-jsonschema-form format for workflow nodes. This is primarily used to generate dynamic forms at runtime. If a node returns a runtime schema variable, the UI will render the form upon node execution. :param json_schema: json schema in RJSF format. :type json_schema: dict :param ui_schema: ui schema in RJSF format. :type ui_schema: dict .. py:attribute:: json_schema :type: dict .. py:attribute:: ui_schema :type: dict .. py:class:: CodeResponseSection(code, language, section_key = None) Bases: :py:obj:`ResponseSection` A response section that an agent can return to render code. :param code: The code to be displayed. :type code: str :param language: The language of the code. :type language: CodeLanguage .. py:attribute:: code :type: str .. py:attribute:: language :type: abacusai.api_class.enums.CodeLanguage .. py:class:: Base64ImageResponseSection(b64_image, section_key = None) Bases: :py:obj:`ResponseSection` A response section that an agent can return to render a base64 image. :param b64_image: The base64 image to be displayed. :type b64_image: str .. py:attribute:: b64_image :type: str .. py:class:: CollapseResponseSection(title, content, section_key = None) Bases: :py:obj:`ResponseSection` A response section that an agent can return to render a collapsible component. :param title: The title of the collapsible component. :type title: str :param content: The response section constituting the content of collapsible component :type content: ResponseSection .. py:attribute:: title :type: str .. py:attribute:: content :type: ResponseSection .. py:method:: to_dict() Standardizes converting an ApiClass to dictionary. Keys of response dictionary are converted to camel case. This also validates the fields ( type, value, etc ) received in the dictionary. .. py:class:: ListResponseSection(items, section_key = None) Bases: :py:obj:`ResponseSection` A response section that an agent can return to render a list. :param items: The list items to be displayed. :type items: List[str] .. py:attribute:: items :type: List[str] .. py:class:: ChartResponseSection(chart, section_key = None) Bases: :py:obj:`ResponseSection` A response section that an agent can return to render a chart. :param chart: The chart to be displayed. :type chart: dict .. py:attribute:: chart :type: dict .. 
py:class:: DataframeResponseSection(df, header = None, section_key = None) Bases: :py:obj:`ResponseSection` A response section that an agent can return to render a pandas dataframe. :param df: The dataframe to be displayed. :type df: pandas.DataFrame :param header: Heading of the table to be displayed. :type header: str .. py:attribute:: df :type: Any .. py:attribute:: header :type: str
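To close, a hedged sketch of how an agent might compose several of the response sections documented above into a reply; the section contents are invented for illustration, the import path is assumed from the documented attribute types, and the surrounding agent plumbing is omitted.

.. code-block:: python

   # Illustrative sketch: composing response sections for an agent reply.
   from abacusai.api_class import (
       TextResponseSection,
       ListResponseSection,
       CollapseResponseSection,
   )

   summary = TextResponseSection("Three anomalies were found in today's data.")
   details = ListResponseSection(
       ["2024-01-03 spike", "2024-01-07 dip", "2024-01-09 spike"]
   )

   # A collapsible wrapper whose content is itself a response section.
   sections = [
       summary,
       CollapseResponseSection(title="Details", content=details),
   ]

   # Each section serializes via to_dict() before being returned by the agent.
   payload = [section.to_dict() for section in sections]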