abacusai.prediction_client

Classes

PredictionClient

Abacus.AI Prediction API Client. Does not utilize authentication and only contains public prediction methods

Module Contents

class abacusai.prediction_client.PredictionClient(client_options=None)

Bases: abacusai.client.BaseApiClient

Abacus.AI Prediction API Client. Does not utilize authentication and only contains public prediction methods

Parameters:

client_options (ClientOptions) – Optional API client configurations

predict_raw(deployment_token, deployment_id, **kwargs)

Raw interface for returning predictions from Plug and Play deployments.

Parameters:
  • deployment_token (str) – The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.

  • deployment_id (str) – The unique identifier to a deployment created under the project.

  • **kwargs (dict) – Arbitrary key/value pairs may be passed in and are sent as part of the request body.
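
A minimal usage sketch; the deployment token, deployment ID, and the extra request field are placeholders:

    from abacusai.prediction_client import PredictionClient

    client = PredictionClient()
    # Any extra keyword arguments are sent as part of the request body.
    result = client.predict_raw(
        deployment_token="YOUR_DEPLOYMENT_TOKEN",
        deployment_id="YOUR_DEPLOYMENT_ID",
        input_text="hello world",  # hypothetical field expected by the deployed model
    )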

lookup_features(deployment_token, deployment_id, query_data, limit_results=None, result_columns=None)

Returns the feature group deployed in the feature store project.

Parameters:
  • deployment_token (str) – A deployment token used to authenticate access to created deployments. This token only authorizes predictions on deployments in this project, so it can be safely embedded inside an application or website.

  • deployment_id (str) – A unique identifier for a deployment created under the project.

  • query_data (dict) – A dictionary where the key is the column name (e.g. a column with name ‘user_id’ in your dataset) mapped to the column mapping USER_ID that uniquely identifies the entity against which a prediction is performed and the value is the unique value of the same entity.

  • limit_results (int) – If provided, will limit the number of results to the value specified.

  • result_columns (list) – If provided, will limit the columns present in each result to the columns specified in this list.

Return type:

Dict
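
A minimal usage sketch; the token, deployment ID, column names, and values are placeholders:

    from abacusai.prediction_client import PredictionClient

    client = PredictionClient()
    features = client.lookup_features(
        deployment_token="YOUR_DEPLOYMENT_TOKEN",
        deployment_id="YOUR_DEPLOYMENT_ID",
        query_data={"user_id": "u_123"},      # column mapped to USER_ID (assumed)
        limit_results=10,
        result_columns=["age", "plan_type"],  # hypothetical feature columns
    )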

predict(deployment_token, deployment_id, query_data, **kwargs)

Returns a prediction for Predictive Modeling

Parameters:
  • deployment_token (str) – A deployment token used to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, and is safe to embed in an application or website.

  • deployment_id (str) – A unique identifier for a deployment created under the project.

  • query_data (dict) – A dictionary where the key is the column name (e.g. a column with name ‘user_id’ in the dataset) mapped to the column mapping USER_ID that uniquely identifies the entity against which a prediction is performed, and the value is the unique value of the same entity.

Return type:

Dict
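
A minimal usage sketch; the token, deployment ID, and query values are placeholders:

    from abacusai.prediction_client import PredictionClient

    client = PredictionClient()
    prediction = client.predict(
        deployment_token="YOUR_DEPLOYMENT_TOKEN",
        deployment_id="YOUR_DEPLOYMENT_ID",
        query_data={"user_id": "u_123"},  # key is the dataset column mapped to USER_ID
    )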

predict_multiple(deployment_token, deployment_id, query_data)

Returns a list of predictions for predictive modeling.

Parameters:
  • deployment_token (str) – The deployment token used to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, and is safe to embed in an application or website.

  • deployment_id (str) – The unique identifier for a deployment created under the project.

  • query_data (list) – A list of dictionaries, where the ‘key’ is the column name (e.g. a column with name ‘user_id’ in the dataset) mapped to the column mapping USER_ID that uniquely identifies the entity against which a prediction is performed, and the ‘value’ is the unique value of the same entity.

Return type:

Dict

predict_from_datasets(deployment_token, deployment_id, query_data)

Returns a list of predictions for Predictive Modeling.

Parameters:
  • deployment_token (str) – The deployment token used to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.

  • deployment_id (str) – The unique identifier for a deployment created under the project.

  • query_data (dict) – A dictionary where the ‘key’ is the source dataset name, and the ‘value’ is a list of records corresponding to the dataset rows.

Return type:

Dict

predict_lead(deployment_token, deployment_id, query_data, explain_predictions=False, explainer_type=None)

Returns the probability of a user being a lead based on their interaction with the service/product and their own attributes (e.g. income, assets, credit score, etc.). Note that the inputs to this method, wherever applicable, should be the column names in the dataset mapped to the column mappings in our system (e.g. column ‘user_id’ mapped to mapping ‘LEAD_ID’ in our system).

Parameters:
  • deployment_token (str) – The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.

  • deployment_id (str) – The unique identifier to a deployment created under the project.

  • query_data (dict) – A dictionary containing user attributes and/or user’s interaction data with the product/service (e.g. number of clicks, items in cart, etc.).

  • explain_predictions (bool) – Will explain predictions for leads

  • explainer_type (str) – Type of explainer to use for explanations

Return type:

Dict

predict_churn(deployment_token, deployment_id, query_data, explain_predictions=False, explainer_type=None)

Returns the probability that a user will churn based on their interactions with the item/product/service. Note that the inputs to this method, wherever applicable, will be the column names in your dataset mapped to the column mappings in our system (e.g. column ‘churn_result’ mapped to mapping ‘CHURNED_YN’ in our system).

Parameters:
  • deployment_token (str) – The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.

  • deployment_id (str) – The unique identifier to a deployment created under the project.

  • query_data (dict) – This will be a dictionary where the ‘key’ will be the column name (e.g. a column with name ‘user_id’ in your dataset) mapped to the column mapping USER_ID that uniquely identifies the entity against which a prediction is performed and the ‘value’ will be the unique value of the same entity.

  • explain_predictions (bool) – Will explain predictions for churn

  • explainer_type (str) – Type of explainer to use for explanations

Return type:

Dict
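
A minimal usage sketch with explanations enabled; the identifiers below are placeholders:

    from abacusai.prediction_client import PredictionClient

    client = PredictionClient()
    churn = client.predict_churn(
        deployment_token="YOUR_DEPLOYMENT_TOKEN",
        deployment_id="YOUR_DEPLOYMENT_ID",
        query_data={"user_id": "u_123"},  # column mapped to USER_ID (placeholder)
        explain_predictions=True,         # also return an explanation for the prediction
    )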

predict_takeover(deployment_token, deployment_id, query_data)

Returns a probability for each class label associated with the types of fraud or a ‘yes’ or ‘no’ type label for the possibility of fraud. Note that the inputs to this method, wherever applicable, will be the column names in the dataset mapped to the column mappings in our system (e.g., column ‘account_name’ mapped to mapping ‘ACCOUNT_ID’ in our system).

Parameters:
  • deployment_token (str) – The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside an application or website.

  • deployment_id (str) – The unique identifier to a deployment created under the project.

  • query_data (dict) – A dictionary containing account activity characteristics (e.g., login id, login duration, login type, IP address, etc.).

Return type:

Dict

predict_fraud(deployment_token, deployment_id, query_data)

Returns the probability of a transaction performed under a specific account being fraudulent or not. Note that the inputs to this method, wherever applicable, should be the column names in your dataset mapped to the column mappings in our system (e.g. column ‘account_number’ mapped to the mapping ‘ACCOUNT_ID’ in our system).

Parameters:
  • deployment_token (str) – A deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.

  • deployment_id (str) – A unique identifier to a deployment created under the project.

  • query_data (dict) – A dictionary containing transaction attributes (e.g. credit card type, transaction location, transaction amount, etc.).

Return type:

Dict

predict_class(deployment_token, deployment_id, query_data, threshold=None, threshold_class=None, thresholds=None, explain_predictions=False, fixed_features=None, nested=None, explainer_type=None)

Returns a classification prediction

Parameters:
  • deployment_token (str) – The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model within an application or website.

  • deployment_id (str) – The unique identifier for a deployment created under the project.

  • query_data (dict) – A dictionary where the ‘Key’ is the column name (e.g. a column with the name ‘user_id’ in your dataset) mapped to the column mapping USER_ID that uniquely identifies the entity against which a prediction is performed and the ‘Value’ is the unique value of the same entity.

  • threshold (float) – A float value that is applied on the popular class label.

  • threshold_class (str) – The label upon which the threshold is added (binary labels only).

  • thresholds (Dict) – Maps labels to thresholds (multi-label classification only). Defaults to F1 optimal threshold if computed for the given class, else uses 0.5.

  • explain_predictions (bool) – If True, returns the SHAP explanations for all input features.

  • fixed_features (list) – A set of input features to treat as constant for explanations - only honored when the explainer type is KERNEL_EXPLAINER

  • nested (str) – If specified generates prediction delta for each index of the specified nested feature.

  • explainer_type (str) – The type of explainer to use.

Return type:

Dict
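
A minimal usage sketch; the identifiers, threshold value, and column name are placeholders:

    from abacusai.prediction_client import PredictionClient

    client = PredictionClient()
    result = client.predict_class(
        deployment_token="YOUR_DEPLOYMENT_TOKEN",
        deployment_id="YOUR_DEPLOYMENT_ID",
        query_data={"user_id": "u_123"},  # column mapped to USER_ID (placeholder)
        threshold=0.7,                    # applied to the popular class label
        explain_predictions=True,         # return SHAP explanations for the input features
    )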

predict_target(deployment_token, deployment_id, query_data, explain_predictions=False, fixed_features=None, nested=None, explainer_type=None)

Returns a prediction from a classification or regression model. Optionally, includes explanations.

Parameters:
  • deployment_token (str) – The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.

  • deployment_id (str) – The unique identifier of a deployment created under the project.

  • query_data (dict) – A dictionary where the ‘key’ is the column name (e.g. a column with name ‘user_id’ in your dataset) mapped to the column mapping USER_ID that uniquely identifies the entity against which a prediction is performed and the ‘value’ is the unique value of the same entity.

  • explain_predictions (bool) – If true, returns the SHAP explanations for all input features.

  • fixed_features (list) – Set of input features to treat as constant for explanations - only honored when the explainer type is KERNEL_EXPLAINER

  • nested (str) – If specified, generates prediction delta for each index of the specified nested feature.

  • explainer_type (str) – The type of explainer to use.

Return type:

Dict

get_anomalies(deployment_token, deployment_id, threshold=None, histogram=False)

Returns a list of anomalies from the training dataset.

Parameters:
  • deployment_token (str) – The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.

  • deployment_id (str) – The unique identifier to a deployment created under the project.

  • threshold (float) – The threshold score of what is an anomaly. Valid values are between 0.8 and 0.99.

  • histogram (bool) – If True, will return a histogram of the distribution of all points.

Return type:

io.BytesIO

get_timeseries_anomalies(deployment_token, deployment_id, start_timestamp=None, end_timestamp=None, query_data=None, get_all_item_data=False, series_ids=None)

Returns a list of anomalous timestamps from the training dataset.

Parameters:
  • deployment_token (str) – The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.

  • deployment_id (str) – The unique identifier to a deployment created under the project.

  • start_timestamp (str) – The timestamp from which anomalies are to be detected in the training data.

  • end_timestamp (str) – The timestamp up to which anomalies are to be detected in the training data.

  • query_data (dict) – Additional data on which anomaly detection is to be performed; it can be a single record, a list of records, or a JSON string representing a list of records.

  • get_all_item_data (bool) – Set to True if anomaly detection should be performed on all data related to the input IDs.

  • series_ids (List) – The list of series IDs on which anomaly detection is to be performed.

Return type:

Dict

is_anomaly(deployment_token, deployment_id, query_data=None)

Returns a list of anomaly attributes based on login information for a specified account. Note that the inputs to this method, wherever applicable, should be the column names in the dataset mapped to the column mappings in our system (e.g. column ‘account_name’ mapped to mapping ‘ACCOUNT_ID’ in our system).

Parameters:
  • deployment_token (str) – The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.

  • deployment_id (str) – The unique identifier to a deployment created under the project.

  • query_data (dict) – The input data for the prediction.

Return type:

Dict

get_event_anomaly_score(deployment_token, deployment_id, query_data=None)

Returns an anomaly score for an event.

Parameters:
  • deployment_token (str) – The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.

  • deployment_id (str) – The unique identifier to a deployment created under the project.

  • query_data (dict) – The input data for the prediction.

Return type:

Dict

get_forecast(deployment_token, deployment_id, query_data, future_data=None, num_predictions=None, prediction_start=None, explain_predictions=False, explainer_type=None, get_item_data=False)

Returns a list of forecasts for a given entity under the specified project deployment. Note that the inputs to the deployed model will be the column names in your dataset mapped to the column mappings in our system (e.g. column ‘holiday_yn’ mapped to mapping ‘FUTURE’ in our system).

Parameters:
  • deployment_token (str) – The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.

  • deployment_id (str) – The unique identifier to a deployment created under the project.

  • query_data (dict) – This will be a dictionary where ‘Key’ will be the column name (e.g. a column with name ‘store_id’ in your dataset) mapped to the column mapping ITEM_ID that uniquely identifies the entity against which forecasting is performed and ‘Value’ will be the unique value of the same entity.

  • future_data (list) – This will be a list of values known ahead of time that are relevant for forecasting (e.g. State Holidays, National Holidays, etc.). Each element is a dictionary, where the key and the value both will be of type ‘str’. For example future data entered for a Store may be [{“Holiday”:”No”, “Promo”:”Yes”, “Date”: “2015-07-31 00:00:00”}].

  • num_predictions (int) – The number of timestamps to predict in the future.

  • prediction_start (str) – The start date for predictions (e.g., “2015-08-01T00:00:00” as the input for midnight of 2015-08-01).

  • explain_predictions (bool) – Will explain predictions for forecasting

  • explainer_type (str) – Type of explainer to use for explanations

  • get_item_data (bool) – Will return the data corresponding to items in query

Return type:

Dict
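
A minimal usage sketch based on the future_data example above; the token and deployment ID are placeholders:

    from abacusai.prediction_client import PredictionClient

    client = PredictionClient()
    forecast = client.get_forecast(
        deployment_token="YOUR_DEPLOYMENT_TOKEN",
        deployment_id="YOUR_DEPLOYMENT_ID",
        query_data={"store_id": "S42"},  # column mapped to ITEM_ID (placeholder)
        future_data=[{"Holiday": "No", "Promo": "Yes", "Date": "2015-07-31 00:00:00"}],
        num_predictions=14,
        prediction_start="2015-08-01T00:00:00",
    )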

get_k_nearest(deployment_token, deployment_id, vector, k=None, distance=None, include_score=False, catalog_id=None)

Returns the k nearest neighbors for the provided embedding vector.

Parameters:
  • deployment_token (str) – The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.

  • deployment_id (str) – The unique identifier to a deployment created under the project.

  • vector (list) – Input vector to perform the k nearest neighbors with.

  • k (int) – Overrideable number of items to return.

  • distance (str) – Specify the distance function to use. Options include “dot”, “cosine”, “euclidean”, and “manhattan”. Default is “dot”.

  • include_score (bool) – If True, will return the score alongside the resulting embedding value.

  • catalog_id (str) – An optional parameter honored only for embeddings that provide a catalog id

Return type:

Dict
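
A minimal usage sketch; the embedding vector and identifiers are placeholders:

    from abacusai.prediction_client import PredictionClient

    client = PredictionClient()
    neighbors = client.get_k_nearest(
        deployment_token="YOUR_DEPLOYMENT_TOKEN",
        deployment_id="YOUR_DEPLOYMENT_ID",
        vector=[0.12, -0.53, 0.98],  # query embedding (placeholder values)
        k=5,
        distance="cosine",           # one of "dot", "cosine", "euclidean", "manhattan"
        include_score=True,
    )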

get_multiple_k_nearest(deployment_token, deployment_id, queries)

Returns the k nearest neighbors for the queries provided.

Parameters:
  • deployment_token (str) – The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.

  • deployment_id (str) – The unique identifier to a deployment created under the project.

  • queries (list) – List of mappings of format {“catalogId”: “cat0”, “vectors”: […], “k”: 20, “distance”: “euclidean”}. See getKNearest for additional information about the supported parameters.

get_labels(deployment_token, deployment_id, query_data, return_extracted_entities=False)

Returns a list of scored labels for a document.

Parameters:
  • deployment_token (str) – The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.

  • deployment_id (str) – The unique identifier to a deployment created under the project.

  • query_data (dict) – Dictionary where key is “Content” and value is the text from which entities are to be extracted.

  • return_extracted_entities (bool) – (Optional) If True, will return the extracted entities in a simpler format.

Return type:

Dict

get_entities_from_pdf(deployment_token, deployment_id, pdf=None, doc_id=None, return_extracted_features=False, verbose=False, save_extracted_features=None)

Extracts text from the provided PDF and returns a list of recognized labels and their scores.

Parameters:
  • deployment_token (str) – The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.

  • deployment_id (str) – The unique identifier to a deployment created under the project.

  • pdf (io.TextIOBase) – (Optional) The pdf to predict on. One of pdf or docId must be specified.

  • doc_id (str) – (Optional) The document ID of the PDF to predict on. One of pdf or docId must be specified.

  • return_extracted_features (bool) – (Optional) If True, will return all extracted features (e.g. all tokens in a page) from the PDF. Default is False.

  • verbose (bool) – (Optional) If True, will return all the extracted tokens probabilities for all the trained labels. Default is False.

  • save_extracted_features (bool) – (Optional) If True, will save extracted features (i.e. page tokens) so that they can be fetched using the prediction docId. Default is False.

Return type:

Dict

get_recommendations(deployment_token, deployment_id, query_data, num_items=None, page=None, exclude_item_ids=None, score_field=None, scaling_factors=None, restrict_items=None, exclude_items=None, explore_fraction=None, diversity_attribute_name=None, diversity_max_results_per_value=None)

Returns a list of recommendations for a given user under the specified project deployment. Note that the inputs to this method, wherever applicable, will be the column names in your dataset mapped to the column mappings in our system (e.g. column ‘time’ mapped to mapping ‘TIMESTAMP’ in our system).

Parameters:
  • deployment_token (str) – The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.

  • deployment_id (str) – The unique identifier to a deployment created under the project.

  • query_data (dict) – This will be a dictionary where ‘Key’ will be the column name (e.g. a column with name ‘user_name’ in your dataset) mapped to the column mapping USER_ID that uniquely identifies the user against which recommendations are made and ‘Value’ will be the unique value of the same item. For example, if you have the column name ‘user_name’ mapped to the column mapping ‘USER_ID’, then the query must have the exact same column name (user_name) as key and the name of the user (John Doe) as value.

  • num_items (int) – The number of items to recommend on one page. By default, it is set to 50 items per page.

  • page (int) – The page number to be displayed. For example, let’s say that the num_items is set to 10 with the total recommendations list size of 50 recommended items, then an input value of 2 in the ‘page’ variable will display a list of items that rank from 11th to 20th.

  • score_field (str) – If provided, the relative item scores are returned in a separate field whose name matches the value passed for this argument.

  • scaling_factors (list) – Allows you to bias the model towards certain items. The input is a list of dictionaries of the form {“column”: “col0”, “values”: [“value0”, “value1”], “factor”: 1.1}, where “column” is the column name, “values” is the list of items towards which the recommendations should be biased, and “factor” is the multiplier applied to the scores of those items. For example, with scaling_factors set to [{“column”: “VehicleType”, “values”: [“SUV”, “Sedan”], “factor”: 1.4}], the predicted probability of every SUV and Sedan is multiplied by 1.4 before sorting. This is particularly useful for promoting a type of item that might be less popular, or for demoting an item that always comes up.

  • restrict_items (list) – Allows you to restrict the recommendations to certain items. The input is a list of dictionaries of the form {“column”: “col0”, “values”: [“value0”, “value1”, “value3”, …]}, where “column” is the column name and “values” is the list of items to which the recommendations are restricted. For example, [{“column”: “VehicleType”, “values”: [“SUV”, “Sedan”]}] restricts the recommendations to SUVs and Sedans. This is particularly useful when you know a list of items is relevant in a particular scenario and you want to restrict the recommendations to that list.

  • exclude_items (list) – Allows you to exclude certain items from the list of recommendations. The input is a list of dictionaries of the form {“column”: “col0”, “values”: [“value0”, “value1”, …]}, where “column” is the column name and “values” is the list of items to exclude. For example, [{“column”: “VehicleType”, “values”: [“SUV”, “Sedan”]}] excludes all SUVs and Sedans from the recommendations. This is particularly useful when you know a list of items is of no use in a particular scenario and you don’t want to show them.

  • explore_fraction (float) – Explore fraction.

  • diversity_attribute_name (str) – item attribute column name which is used to ensure diversity of prediction results.

  • diversity_max_results_per_value (int) – maximum number of results per value of diversity_attribute_name.

  • exclude_item_ids (list)

Return type:

Dict
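
A minimal usage sketch using the scaling_factors example above; the identifiers and column names are placeholders:

    from abacusai.prediction_client import PredictionClient

    client = PredictionClient()
    recommendations = client.get_recommendations(
        deployment_token="YOUR_DEPLOYMENT_TOKEN",
        deployment_id="YOUR_DEPLOYMENT_ID",
        query_data={"user_name": "John Doe"},  # column mapped to USER_ID
        num_items=10,
        page=2,                                # returns items ranked 11th to 20th
        scaling_factors=[{"column": "VehicleType", "values": ["SUV", "Sedan"], "factor": 1.4}],
    )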

get_personalized_ranking(deployment_token, deployment_id, query_data, preserve_ranks=None, preserve_unknown_items=False, scaling_factors=None)

Returns a list of items with personalized promotions for a given user under the specified project deployment. Note that the inputs to this method, wherever applicable, should be the column names in the dataset mapped to the column mappings in our system (e.g. column ‘item_code’ mapped to mapping ‘ITEM_ID’ in our system).

Parameters:
  • deployment_token (str) – The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model in an application or website.

  • deployment_id (str) – The unique identifier to a deployment created under the project.

  • query_data (dict) – This should be a dictionary with two key-value pairs. The first pair represents a ‘Key’ where the column name (e.g. a column with name ‘user_id’ in the dataset) mapped to the column mapping USER_ID uniquely identifies the user against whom a prediction is made, and a ‘Value’ which is the identifier value for that user. The second pair will have a ‘Key’ which is the name of the column (e.g. movie_name) mapped to ITEM_ID (unique item identifier) and a ‘Value’ which is a list of identifiers that uniquely identify those items.

  • preserve_ranks (list) – List of dictionaries of format {“column”: “col0”, “values”: [“value0”, “value1”]}, where the ranks of items in query_data are preserved for all the items in “col0” with values “value0” and “value1”. This option is useful when the desired items are being recommended in the desired order and the ranks for those items need to be kept unchanged during recommendation generation.

  • preserve_unknown_items (bool) – If True, any items unknown to the model will not be reranked, and their original position in the query will be preserved.

  • scaling_factors (list) – Allows you to bias the model towards certain items. The input is a list of dictionaries of the form {“column”: “col0”, “values”: [“value0”, “value1”], “factor”: 1.1}, where “column” is the column name, “values” is the list of items towards which the model should be biased, and “factor” is the multiplier applied to the scores of those items. For example, with scaling_factors set to [{“column”: “VehicleType”, “values”: [“SUV”, “Sedan”], “factor”: 1.4}], the predicted probability of every SUV and Sedan is multiplied by 1.4 before sorting. This is particularly useful for promoting a type of item that might be less popular, or for demoting an item that always comes up.

Return type:

Dict

get_ranked_items(deployment_token, deployment_id, query_data, preserve_ranks=None, preserve_unknown_items=False, score_field=None, scaling_factors=None, diversity_attribute_name=None, diversity_max_results_per_value=None)

Returns a list of re-ranked items for a selected user when a list of items is required to be reranked according to the user’s preferences. Note that the inputs to this method, wherever applicable, will be the column names in your dataset mapped to the column mappings in our system (e.g. column ‘item_code’ mapped to mapping ‘ITEM_ID’ in our system).

Parameters:
  • deployment_token (str) – The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.

  • deployment_id (str) – The unique identifier to a deployment created under the project.

  • query_data (dict) – This will be a dictionary with two key-value pairs. The first pair represents a ‘Key’ where the column name (e.g. a column with name ‘user_id’ in your dataset) mapped to the column mapping USER_ID uniquely identifies the user against whom a prediction is made, and a ‘Value’ which is the identifier value for that user. The second pair will have a ‘Key’ which is the name of the column (e.g. movie_name) mapped to ITEM_ID (unique item identifier) and a ‘Value’ which is a list of identifiers that uniquely identify those items.

  • preserve_ranks (list) – List of dictionaries of format {“column”: “col0”, “values”: [“value0”, “value1”]}, where the ranks of items in query_data are preserved for all the items in “col0” with values “value0” and “value1”. This option is useful when the desired items are being recommended in the desired order and the ranks for those items need to be kept unchanged during recommendation generation.

  • preserve_unknown_items (bool) – If True, any items unknown to the model will not be reranked, and their original position in the query will be preserved.

  • score_field (str) – If provided, the relative item scores are returned in a separate field whose name matches the value passed for this argument.

  • scaling_factors (list) – Allows you to bias the model towards certain items. The input is a list of dictionaries of the form {“column”: “col0”, “values”: [“value0”, “value1”], “factor”: 1.1}, where “column” is the column name, “values” is the list of items towards which the model should be biased, and “factor” is the multiplier applied to the scores of those items. For example, with scaling_factors set to [{“column”: “VehicleType”, “values”: [“SUV”, “Sedan”], “factor”: 1.4}], the predicted probability of every SUV and Sedan is multiplied by 1.4 before sorting. This is particularly useful for promoting a type of item that might be less popular, or for demoting an item that always comes up.

  • diversity_attribute_name (str) – item attribute column name which is used to ensure diversity of prediction results.

  • diversity_max_results_per_value (int) – maximum number of results per value of diversity_attribute_name.

Return type:

Dict

get_related_items(deployment_token, deployment_id, query_data, num_items=None, page=None, scaling_factors=None, restrict_items=None, exclude_items=None)

Returns a list of related items for a given item under the specified project deployment. Note that the inputs to this method, wherever applicable, will be the column names in your dataset mapped to the column mappings in our system (e.g. column ‘item_code’ mapped to mapping ‘ITEM_ID’ in our system).

Parameters:
  • deployment_token (str) – The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.

  • deployment_id (str) – The unique identifier to a deployment created under the project.

  • query_data (dict) – This will be a dictionary where the ‘key’ will be the column name (e.g. a column with name ‘user_name’ in your dataset) mapped to the column mapping USER_ID that uniquely identifies the user against which related items are determined and the ‘value’ will be the unique value of the same item. For example, if you have the column name ‘user_name’ mapped to the column mapping ‘USER_ID’, then the query must have the exact same column name (user_name) as key and the name of the user (John Doe) as value.

  • num_items (int) – The number of items to recommend on one page. By default, it is set to 50 items per page.

  • page (int) – The page number to be displayed. For example, let’s say that the num_items is set to 10 with the total recommendations list size of 50 recommended items, then an input value of 2 in the ‘page’ variable will display a list of items that rank from 11th to 20th.

  • scaling_factors (list) – Allows you to bias the model towards certain items. The input is a list of dictionaries of the form {“column”: “col0”, “values”: [“value0”, “value1”], “factor”: 1.1}, where “column” is the column name, “values” is the list of items towards which the recommendations should be biased, and “factor” is the multiplier applied to the scores of those items. For example, with scaling_factors set to [{“column”: “VehicleType”, “values”: [“SUV”, “Sedan”], “factor”: 1.4}], the predicted probability of every SUV and Sedan is multiplied by 1.4 before sorting. This is particularly useful for promoting a type of item that might be less popular, or for demoting an item that always comes up.

  • restrict_items (list) – Allows you to restrict the recommendations to certain items. The input is a list of dictionaries of the form {“column”: “col0”, “values”: [“value0”, “value1”, “value3”, …]}, where “column” is the column name and “values” is the list of items to which the recommendations are restricted. For example, [{“column”: “VehicleType”, “values”: [“SUV”, “Sedan”]}] restricts the recommendations to SUVs and Sedans. This is particularly useful when you know a list of items is relevant in a particular scenario and you want to restrict the recommendations to that list.

  • exclude_items (list) – Allows you to exclude certain items from the list of recommendations. The input is a list of dictionaries of the form {“column”: “col0”, “values”: [“value0”, “value1”, …]}, where “column” is the column name and “values” is the list of items to exclude. For example, [{“column”: “VehicleType”, “values”: [“SUV”, “Sedan”]}] excludes all SUVs and Sedans from the recommendations. This is particularly useful when you know a list of items is of no use in a particular scenario and you don’t want to show them.

Return type:

Dict

get_chat_response(deployment_token, deployment_id, messages, llm_name=None, num_completion_tokens=None, system_message=None, temperature=0.0, filter_key_values=None, search_score_cutoff=None, chat_config=None)

Return a chat response which continues the conversation based on the input messages and search results.

Parameters:
  • deployment_token (str) – The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.

  • deployment_id (str) – The unique identifier to a deployment created under the project.

  • messages (list) – A list of chronologically ordered messages, starting with a user message and alternating sources. A message is a dict with attributes: is_user (bool): Whether the message is from the user. text (str): The message’s text.

  • llm_name (str) – Name of the specific LLM backend to use to power the chat experience

  • num_completion_tokens (int) – Default for maximum number of tokens for chat answers

  • system_message (str) – The generative LLM system message

  • temperature (float) – The generative LLM temperature

  • filter_key_values (dict) – A dictionary mapping column names to a list of values to restrict the retrieved search results.

  • search_score_cutoff (float) – Cutoff for the document retriever score. Matching search results below this score will be ignored.

  • chat_config (dict) – A dictionary specifying the query chat config override.

Return type:

Dict
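
A minimal usage sketch; the identifiers, message text, and score cutoff are placeholders:

    from abacusai.prediction_client import PredictionClient

    client = PredictionClient()
    response = client.get_chat_response(
        deployment_token="YOUR_DEPLOYMENT_TOKEN",
        deployment_id="YOUR_DEPLOYMENT_ID",
        messages=[{"is_user": True, "text": "What does the warranty cover?"}],
        temperature=0.0,
        search_score_cutoff=0.3,  # ignore retrieved results scoring below this value
    )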

get_chat_response_with_binary_data(deployment_token, deployment_id, messages, llm_name=None, num_completion_tokens=None, system_message=None, temperature=0.0, filter_key_values=None, search_score_cutoff=None, chat_config=None, attachments=None)

Return a chat response which continues the conversation based on the input messages and search results.

Parameters:
  • deployment_token (str) – The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.

  • deployment_id (str) – The unique identifier to a deployment created under the project.

  • messages (list) – A list of chronologically ordered messages, starting with a user message and alternating sources. A message is a dict with attributes: is_user (bool): Whether the message is from the user. text (str): The message’s text.

  • llm_name (str) – Name of the specific LLM backend to use to power the chat experience

  • num_completion_tokens (int) – Default for maximum number of tokens for chat answers

  • system_message (str) – The generative LLM system message

  • temperature (float) – The generative LLM temperature

  • filter_key_values (dict) – A dictionary mapping column names to a list of values to restrict the retrieved search results.

  • search_score_cutoff (float) – Cutoff for the document retriever score. Matching search results below this score will be ignored.

  • chat_config (dict) – A dictionary specifying the query chat config override.

  • attachments (None) – A dictionary of binary data to use to answer the queries.

Return type:

Dict

get_conversation_response(deployment_id, message, deployment_token, deployment_conversation_id=None, external_session_id=None, llm_name=None, num_completion_tokens=None, system_message=None, temperature=0.0, filter_key_values=None, search_score_cutoff=None, chat_config=None, doc_infos=None)

Return a conversation response which continues the conversation based on the input message and deployment conversation id (if exists).

Parameters:
  • deployment_id (str) – The unique identifier to a deployment created under the project.

  • message (str) – A message from the user

  • deployment_token (str) – A token used to authenticate access to deployments created in this project. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.

  • deployment_conversation_id (str) – The unique identifier of a deployment conversation to continue. If not specified, a new one will be created.

  • external_session_id (str) – The user-supplied unique identifier of a deployment conversation to continue. If specified, it will be used instead of an internal deployment conversation ID.

  • llm_name (str) – Name of the specific LLM backend to use to power the chat experience

  • num_completion_tokens (int) – Default for maximum number of tokens for chat answers

  • system_message (str) – The generative LLM system message

  • temperature (float) – The generative LLM temperature

  • filter_key_values (dict) – A dictionary mapping column names to a list of values to restrict the retrieved search results.

  • search_score_cutoff (float) – Cutoff for the document retriever score. Matching search results below this score will be ignored.

  • chat_config (dict) – A dictionary specifying the query chat config override.

  • doc_infos (list) – An optional list of documents to use for the conversation. The key ‘doc_id’ is expected to be present in each document for retrieving contents from the docstore.

Return type:

Dict
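
A minimal usage sketch; the identifiers and message are placeholders:

    from abacusai.prediction_client import PredictionClient

    client = PredictionClient()
    reply = client.get_conversation_response(
        deployment_id="YOUR_DEPLOYMENT_ID",
        message="What does the warranty cover?",
        deployment_token="YOUR_DEPLOYMENT_TOKEN",
    )
    # Pass deployment_conversation_id (or external_session_id) on later calls
    # to continue the same conversation instead of starting a new one.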

get_conversation_response_with_binary_data(deployment_id, deployment_token, message, deployment_conversation_id=None, external_session_id=None, llm_name=None, num_completion_tokens=None, system_message=None, temperature=0.0, filter_key_values=None, search_score_cutoff=None, chat_config=None, attachments=None)

Return a conversation response which continues the conversation based on the input message and deployment conversation id (if exists).

Parameters:
  • deployment_id (str) – The unique identifier to a deployment created under the project.

  • deployment_token (str) – A token used to authenticate access to deployments created in this project. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.

  • message (str) – A message from the user

  • deployment_conversation_id (str) – The unique identifier of a deployment conversation to continue. If not specified, a new one will be created.

  • external_session_id (str) – The user-supplied unique identifier of a deployment conversation to continue. If specified, it will be used instead of an internal deployment conversation ID.

  • llm_name (str) – Name of the specific LLM backend to use to power the chat experience

  • num_completion_tokens (int) – Default for maximum number of tokens for chat answers

  • system_message (str) – The generative LLM system message

  • temperature (float) – The generative LLM temperature

  • filter_key_values (dict) – A dictionary mapping column names to a list of values to restrict the retrieved search results.

  • search_score_cutoff (float) – Cutoff for the document retriever score. Matching search results below this score will be ignored.

  • chat_config (dict) – A dictionary specifying the query chat config override.

  • attachments (None) – A dictionary of binary data to use to answer the queries.

Return type:

Dict

get_search_results(deployment_token, deployment_id, query_data, num=15)

Return the most relevant search results to the search query from the uploaded documents.

Parameters:
  • deployment_token (str) – A token used to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it can be securely embedded in an application or website.

  • deployment_id (str) – A unique identifier of a deployment created under the project.

  • query_data (dict) – A dictionary where the key is “Content” and the value is the text from which entities are to be extracted.

  • num (int) – Number of search results to return.

Return type:

Dict
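
A minimal usage sketch; the identifiers and query text are placeholders:

    from abacusai.prediction_client import PredictionClient

    client = PredictionClient()
    results = client.get_search_results(
        deployment_token="YOUR_DEPLOYMENT_TOKEN",
        deployment_id="YOUR_DEPLOYMENT_ID",
        query_data={"Content": "refund policy for damaged items"},
        num=5,
    )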

get_sentiment(deployment_token, deployment_id, document)

Predicts sentiment on a document

Parameters:
  • deployment_token (str) – A token used to authenticate access to deployments created in this project. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.

  • deployment_id (str) – A unique string identifier for a deployment created under this project.

  • document (str) – The document to be analyzed for sentiment.

Return type:

Dict

get_entailment(deployment_token, deployment_id, document)

Predicts the classification of the document

Parameters:
  • deployment_token (str) – The deployment token used to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.

  • deployment_id (str) – A unique string identifier for the deployment created under the project.

  • document (str) – The document to be classified.

Return type:

Dict

get_classification(deployment_token, deployment_id, document)

Predicts the classification of the document

Parameters:
  • deployment_token (str) – The deployment token used to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.

  • deployment_id (str) – A unique string identifier for the deployment created under the project.

  • document (str) – The document to be classified.

Return type:

Dict

get_summary(deployment_token, deployment_id, query_data)

Returns a JSON of the predicted summary for the given document. Note that the inputs to this method, wherever applicable, will be the column names in your dataset mapped to the column mappings in our system (e.g. column ‘text’ mapped to mapping ‘DOCUMENT’ in our system).

Parameters:
  • deployment_token (str) – The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.

  • deployment_id (str) – The unique identifier to a deployment created under the project.

  • query_data (dict) – Raw data dictionary containing the required document data - must have a key ‘document’ corresponding to a DOCUMENT type text as value.

Return type:

Dict

predict_language(deployment_token, deployment_id, query_data)

Predicts the language of the text

Parameters:
  • deployment_token (str) – The deployment token used to authenticate access to created deployments. This token is only authorized to predict on deployments within this project, making it safe to embed this model in an application or website.

  • deployment_id (str) – A unique string identifier for a deployment created under the project.

  • query_data (str) – The input string to detect.

Return type:

Dict

get_assignments(deployment_token, deployment_id, query_data, forced_assignments=None, solve_time_limit_seconds=None, include_all_assignments=False)

Get all positive assignments that match a query.

Parameters:
  • deployment_token (str) – The deployment token used to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it can be safely embedded in an application or website.

  • deployment_id (str) – The unique identifier of a deployment created under the project.

  • query_data (dict) – Specifies the set of assignments being requested. The value for the key can be: 1. A simple scalar value, which is matched exactly; 2. A list of values, which matches any element in the list; 3. A dictionary with keys lower_in/lower_ex and upper_in/upper_ex, which matches values in an inclusive/exclusive range.

  • forced_assignments (dict) – Set of assignments to force and resolve before returning query results.

  • solve_time_limit_seconds (float) – Maximum time in seconds to spend solving the query.

  • include_all_assignments (bool) – If True, will return all assignments, including assignments with value 0. Default is False.

Return type:

Dict
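
A minimal usage sketch showing the three query_data value forms described above; the column names and values are placeholders:

    from abacusai.prediction_client import PredictionClient

    client = PredictionClient()
    assignments = client.get_assignments(
        deployment_token="YOUR_DEPLOYMENT_TOKEN",
        deployment_id="YOUR_DEPLOYMENT_ID",
        query_data={
            "driver_id": "d_17",                             # scalar: matched exactly
            "region": ["north", "east"],                     # list: matches any element
            "shift_start": {"lower_in": 6, "upper_ex": 12},  # range: inclusive lower, exclusive upper
        },
        solve_time_limit_seconds=30.0,
    )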

get_alternative_assignments(deployment_token, deployment_id, query_data, add_constraints=None, solve_time_limit_seconds=None, best_alternate_only=False)

Get alternative positive assignments for given query. Optimal assignments are ignored and the alternative assignments are returned instead.

Parameters:
  • deployment_token (str) – The deployment token used to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it can be safely embedded in an application or website.

  • deployment_id (str) – The unique identifier of a deployment created under the project.

  • query_data (dict) – Specifies the set of assignments being requested. The value for the key can be: 1. A simple scalar value, which is matched exactly; 2. A list of values, which matches any element in the list; 3. A dictionary with keys lower_in/lower_ex and upper_in/upper_ex, which matches values in an inclusive/exclusive range.

  • add_constraints (list) – List of constraint dicts to apply to the query. Each constraint dict should have the following keys: 1. query (dict): Specifies the set of assignment variables involved in the constraint; the format is the same as query_data. 2. operator (str): Constraint operator ‘=’, ‘<=’, or ‘>=’. 3. constant (int): Constraint RHS constant value. 4. coefficient_column (str): Column in the Assignment feature group to be used as the coefficient for the assignment variables; optional, defaults to 1.

  • solve_time_limit_seconds (float) – Maximum time in seconds to spend solving the query.

  • best_alternate_only (bool) – When True, only the best alternate will be returned; when False, multiple alternates are returned.

Return type:

Dict

check_constraints(deployment_token, deployment_id, query_data)

Check for any constraints violated by the overrides.

Parameters:
  • deployment_token (str) – The deployment token used to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model within an application or website.

  • deployment_id (str) – The unique identifier for a deployment created under the project.

  • query_data (dict) – Assignment overrides to the solution.

Return type:

Dict

predict_with_binary_data(deployment_token, deployment_id, blob)

Make predictions for a given blob, e.g. image, audio

Parameters:
  • deployment_token (str) – A token used to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model in an application or website.

  • deployment_id (str) – A unique identifier to a deployment created under the project.

  • blob (io.TextIOBase) – The multipart/form-data of the data.

Return type:

Dict
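
A minimal usage sketch; the identifiers and file name are placeholders:

    from abacusai.prediction_client import PredictionClient

    client = PredictionClient()
    with open("sample.png", "rb") as blob:  # hypothetical local file
        result = client.predict_with_binary_data(
            deployment_token="YOUR_DEPLOYMENT_TOKEN",
            deployment_id="YOUR_DEPLOYMENT_ID",
            blob=blob,
        )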

describe_image(deployment_token, deployment_id, image, categories, top_n=None)

Describe the similarity between an image and a list of categories.

Parameters:
  • deployment_token (str) – Authentication token to access created deployments. This token is only authorized to predict on deployments in the current project, and can be safely embedded in an application or website.

  • deployment_id (str) – Unique identifier of a deployment created under the project.

  • image (io.TextIOBase) – Image to describe.

  • categories (list) – List of candidate categories to compare with the image.

  • top_n (int) – Return the N most similar categories.

Return type:

Dict

get_text_from_document(deployment_token, deployment_id, document=None, adjust_doc_orientation=False, save_predicted_pdf=False, save_extracted_features=False)

Generate text from a document

Parameters:
  • deployment_token (str) – Authentication token to access created deployments. This token is only authorized to predict on deployments in the current project, and can be safely embedded in an application or website.

  • deployment_id (str) – Unique identifier of a deployment created under the project.

  • document (io.TextIOBase) – The input document, which can be an image, PDF, or Word document (some formats might not be supported yet).

  • adjust_doc_orientation (bool) – (Optional) Whether to detect the document page orientation and rotate it if needed.

  • save_predicted_pdf (bool) – (Optional) If True, will save the predicted pdf bytes so that they can be fetched using the prediction docId. Default is False.

  • save_extracted_features (bool) – (Optional) If True, will save extracted features (i.e. page tokens) so that they can be fetched using the prediction docId. Default is False.

Return type:

Dict

transcribe_audio(deployment_token, deployment_id, audio)

Transcribe the audio

Parameters:
  • deployment_token (str) – The deployment token to authenticate access to created deployments. This token is only authorized to make predictions on deployments in this project, so it can be safely embedded in an application or website.

  • deployment_id (str) – The unique identifier of a deployment created under the project.

  • audio (io.TextIOBase) – The audio to transcribe.

Return type:

Dict

classify_image(deployment_token, deployment_id, image=None, doc_id=None)

Classify an image.

Parameters:
  • deployment_token (str) – A deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.

  • deployment_id (str) – A unique string identifier to a deployment created under the project.

  • image (io.TextIOBase) – The binary data of the image to classify. One of image or doc_id must be specified.

  • doc_id (str) – The document ID of the image. One of image or doc_id must be specified.

Return type:

Dict

classify_pdf(deployment_token, deployment_id, pdf=None)

Returns a classification prediction from a PDF

Parameters:
  • deployment_token (str) – The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model within an application or website.

  • deployment_id (str) – The unique identifier for a deployment created under the project.

  • pdf (io.TextIOBase) – (Optional) The pdf to predict on. One of pdf or docId must be specified.

Return type:

Dict

get_cluster(deployment_token, deployment_id, query_data)

Predicts the cluster for given data.

Parameters:
  • deployment_token (str) – The deployment token used to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.

  • deployment_id (str) – A unique string identifier for the deployment created under the project.

  • query_data (dict) – A dictionary where each ‘key’ represents a column name and its corresponding ‘value’ represents the value of that column. For Timeseries Clustering, the ‘key’ should be ITEM_ID, and its value should represent a unique item ID that needs clustering.

Return type:

Dict

get_objects_from_image(deployment_token, deployment_id, image)

Detect objects in an image.

Parameters:
  • deployment_token (str) – A deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.

  • deployment_id (str) – A unique string identifier to a deployment created under the project.

  • image (io.TextIOBase) – The binary data of the image to detect objects from.

Return type:

Dict

score_image(deployment_token, deployment_id, image)

Score an image.

Parameters:
  • deployment_token (str) – A deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.

  • deployment_id (str) – A unique string identifier to a deployment created under the project.

  • image (io.TextIOBase) – The binary data of the image to get the score.

Return type:

Dict

transfer_style(deployment_token, deployment_id, source_image, style_image)

Change the source image to adopt the visual style from the style image.

Parameters:
  • deployment_token (str) – A token used to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model in an application or website.

  • deployment_id (str) – A unique identifier to a deployment created under the project.

  • source_image (io.TextIOBase) – The source image to which the style will be applied.

  • style_image (io.TextIOBase) – The image that has the style as a reference.

Return type:

io.BytesIO

generate_image(deployment_token, deployment_id, query_data)

Generate an image from text prompt.

Parameters:
  • deployment_token (str) – The deployment token used to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model within an application or website.

  • deployment_id (str) – A unique identifier to a deployment created under the project.

  • query_data (dict) – Specifies the text prompt. For example, {‘prompt’: ‘a cat’}

Return type:

io.BytesIO
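
A sketch for generate_image, writing the returned io.BytesIO buffer to a file; the identifiers and output path are placeholders:

from abacusai.prediction_client import PredictionClient

client = PredictionClient()
image_buffer = client.generate_image(
    deployment_token='YOUR_DEPLOYMENT_TOKEN',  # placeholder
    deployment_id='YOUR_DEPLOYMENT_ID',        # placeholder
    query_data={'prompt': 'a cat'},            # text prompt, as documented above
)
with open('generated_cat.png', 'wb') as out:   # hypothetical output path
    out.write(image_buffer.getvalue())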

execute_agent(deployment_token, deployment_id, arguments=None, keyword_arguments=None)

Executes a deployed AI agent function using the arguments as keyword arguments to the agent execute function.

Parameters:
  • deployment_token (str) – The deployment token used to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.

  • deployment_id (str) – A unique string identifier for the deployment created under the project.

  • arguments (list) – Positional arguments to the agent execute function.

  • keyword_arguments (dict) – A dictionary where each ‘key’ represents the parameter name and its corresponding ‘value’ represents the value of that parameter for the agent execute function.

Return type:

Dict
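
A sketch of execute_agent with keyword arguments; the parameter name passed to the agent below is hypothetical and depends on the agent's execute function signature:

from abacusai.prediction_client import PredictionClient

client = PredictionClient()
result = client.execute_agent(
    deployment_token='YOUR_DEPLOYMENT_TOKEN',   # placeholder
    deployment_id='YOUR_DEPLOYMENT_ID',         # placeholder
    keyword_arguments={'ticket_text': 'My order never arrived.'},  # hypothetical agent parameter
)
print(result)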

get_matrix_agent_schema(deployment_token, deployment_id, query, doc_infos=None, deployment_conversation_id=None, external_session_id=None)

Gets the matrix agent schema from a deployed AI agent, using the query to initialize the matrix computation.

Parameters:
  • deployment_token (str) – The deployment token used to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.

  • deployment_id (str) – A unique string identifier for the deployment created under the project.

  • query (str) – User input query to initialize the matrix computation.

  • doc_infos (list) – An optional list of documents used for constructing the matrix. A ‘doc_id’ key is expected to be present in each document for retrieving contents from the docstore.

  • deployment_conversation_id (str) – A unique string identifier for the deployment conversation used for the conversation.

  • external_session_id (str) – A unique string identifier for the session used for the conversation. If both deployment_conversation_id and external_session_id are not provided, a new session will be created.

Return type:

Dict
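
A hedged sketch of get_matrix_agent_schema; the query, document IDs, and deployment identifiers below are all placeholders:

from abacusai.prediction_client import PredictionClient

client = PredictionClient()
schema = client.get_matrix_agent_schema(
    deployment_token='YOUR_DEPLOYMENT_TOKEN',   # placeholder
    deployment_id='YOUR_DEPLOYMENT_ID',         # placeholder
    query='Compare the vendors on pricing and support terms',      # hypothetical query
    doc_infos=[{'doc_id': 'DOC_ID_1'}, {'doc_id': 'DOC_ID_2'}],    # hypothetical docstore IDs
)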

execute_conversation_agent(deployment_token, deployment_id, arguments=None, keyword_arguments=None, deployment_conversation_id=None, external_session_id=None, regenerate=False, doc_infos=None, agent_workflow_node_id=None)

Executes a deployed conversational AI agent function using the arguments as keyword arguments to the agent execute function.

Parameters:
  • deployment_token (str) – The deployment token used to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.

  • deployment_id (str) – A unique string identifier for the deployment created under the project.

  • arguments (list) – Positional arguments to the agent execute function.

  • keyword_arguments (dict) – A dictionary where each ‘key’ represents the parameter name and its corresponding ‘value’ represents the value of that parameter for the agent execute function.

  • deployment_conversation_id (str) – A unique string identifier for the deployment conversation used for the conversation.

  • external_session_id (str) – A unique string identifier for the session used for the conversation. If both deployment_conversation_id and external_session_id are not provided, a new session will be created.

  • regenerate (bool) – If True, will regenerate the response from the last query.

  • doc_infos (list) – An optional list of documents used for the conversation. A ‘doc_id’ key is expected to be present in each document for retrieving contents from the docstore.

  • agent_workflow_node_id (str) – An optional agent workflow node id to trigger agent execution from an intermediate node.

Return type:

Dict
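
A sketch showing two turns of a conversation agent. Reusing the same external_session_id keeps both turns in one conversation; the agent parameter name and session identifier are hypothetical:

from abacusai.prediction_client import PredictionClient

client = PredictionClient()
first = client.execute_conversation_agent(
    deployment_token='YOUR_DEPLOYMENT_TOKEN',    # placeholder
    deployment_id='YOUR_DEPLOYMENT_ID',          # placeholder
    keyword_arguments={'user_message': 'Summarize my open tickets.'},  # hypothetical agent parameter
    external_session_id='support-session-001',   # hypothetical session identifier
)
# A follow-up turn in the same session.
followup = client.execute_conversation_agent(
    deployment_token='YOUR_DEPLOYMENT_TOKEN',
    deployment_id='YOUR_DEPLOYMENT_ID',
    keyword_arguments={'user_message': 'Only show the urgent ones.'},
    external_session_id='support-session-001',
)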

lookup_matches(deployment_token, deployment_id, data=None, filters=None, num=None, result_columns=None, max_words=None, num_retrieval_margin_words=None, max_words_per_chunk=None, score_multiplier_column=None, min_score=None, required_phrases=None, filter_clause=None, crowding_limits=None, include_text_search=False)

Looks up the deployed document retriever and returns the documents matching the given query.

Original documents are split into chunks and stored in the document retriever. This lookup function returns the relevant chunks from the document retriever. Where the provided settings permit, the returned chunks may be expanded to include additional words from the original documents and merged if they overlap. The returned chunks are sorted by relevance.

Parameters:
  • deployment_token (str) – The deployment token used to authenticate access to created deployments. This token is only authorized to predict on deployments within this project, making it safe to embed this model in an application or website.

  • deployment_id (str) – A unique string identifier for the deployment created under the project.

  • data (str) – The query to search for.

  • filters (dict) – A dictionary mapping column names to a list of values to restrict the retrieved search results.

  • num (int) – If provided, will limit the number of results to the value specified.

  • result_columns (list) – If provided, will limit the column properties present in each result to those specified in this list.

  • max_words (int) – If provided, will limit the total number of words in the results to the value specified.

  • num_retrieval_margin_words (int) – If provided, will add this number of words from left and right of the returned chunks.

  • max_words_per_chunk (int) – If provided, will limit the number of words in each chunk to the value specified. If the value provided is smaller than the actual chunk size on disk, which is determined during document retriever creation, the actual chunk size will be used; that is, chunks looked up from document retrievers will not be split into smaller chunks during lookup because of this setting.

  • score_multiplier_column (str) – If provided, will use the values in this column to modify the relevance score of the returned chunks. Values in this column must be numeric.

  • min_score (float) – If provided, will filter out the results with score less than the value specified.

  • required_phrases (list) – If provided, each result will contain at least one of the phrases in the given list. The matching is whitespace and case insensitive.

  • filter_clause (str) – If provided, filters the results of the query using this SQL WHERE clause.

  • crowding_limits (dict) – A dictionary mapping metadata columns to the maximum number of results per unique value of the column. This is used to ensure diversity of metadata attribute values in the results. If a particular attribute value has already reached its maximum count, further results with that same attribute value will be excluded from the final result set. An entry in the map can also be a map specifying the limit per attribute value rather than a single limit for all values, allowing a per-value limit for that attribute. If an attribute value is not present in the map, its limit defaults to zero.

  • include_text_search (bool) – If true, combines the ranking of results from a BM25 text search over the documents with the vector search, using reciprocal rank fusion. This leverages both lexical and semantic matching for better overall results. It is particularly valuable in professional, technical, or specialized fields where both precision in terminology and understanding of context are important.

Returns:

The relevant document results found by the document retriever.

Return type:

list[DocumentRetrieverLookupResult]
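
A sketch of a lookup_matches call combining a few of the options above; the query text, metadata column, and values in filters are hypothetical:

from abacusai.prediction_client import PredictionClient

client = PredictionClient()
matches = client.lookup_matches(
    deployment_token='YOUR_DEPLOYMENT_TOKEN',   # placeholder
    deployment_id='YOUR_DEPLOYMENT_ID',         # placeholder
    data='How do I rotate my API keys?',        # hypothetical query
    filters={'product': ['platform', 'sdk']},   # hypothetical metadata column and values
    num=5,                                      # return at most 5 chunks
    max_words=800,                              # cap total words across results
    include_text_search=True,                   # fuse BM25 ranking with the vector search
)
for match in matches:  # list of DocumentRetrieverLookupResult objects
    print(match)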

get_completion(deployment_token, deployment_id, prompt)

Returns the finetuned LLM generated completion of the prompt.

Parameters:
  • deployment_token (str) – The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.

  • deployment_id (str) – The unique identifier to a deployment created under the project.

  • prompt (str) – The prompt given to the finetuned LLM to generate the completion.

Return type:

Dict
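
A minimal sketch for get_completion; the identifiers and prompt are placeholders:

from abacusai.prediction_client import PredictionClient

client = PredictionClient()
completion = client.get_completion(
    deployment_token='YOUR_DEPLOYMENT_TOKEN',  # placeholder
    deployment_id='YOUR_DEPLOYMENT_ID',        # placeholder
    prompt='Write a one-line release note for version 2.1.',  # hypothetical prompt
)
print(completion)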

execute_agent_with_binary_data(deployment_token, deployment_id, arguments=None, keyword_arguments=None, deployment_conversation_id=None, external_session_id=None, blobs=None)

Executes a deployed AI agent function with binary data as inputs.

Parameters:
  • deployment_token (str) – The deployment token used to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.

  • deployment_id (str) – A unique string identifier for the deployment created under the project.

  • arguments (list) – Positional arguments to the agent execute function.

  • keyword_arguments (dict) – A dictionary where each ‘key’ represents the parameter name and its corresponding ‘value’ represents the value of that parameter for the agent execute function.

  • deployment_conversation_id (str) – A unique string identifier for the deployment conversation used for the conversation.

  • external_session_id (str) – A unique string identifier for the session used for the conversation. If both deployment_conversation_id and external_session_id are not provided, a new session will be created.

  • blobs (dict) – A dictionary of binary data to use as inputs to the agent execute function.

Returns:

The result of the agent execution

Return type:

AgentDataExecutionResult
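
A sketch of passing binary inputs through blobs; the blob key, file path, and agent parameter name are hypothetical:

from abacusai.prediction_client import PredictionClient

client = PredictionClient()
with open('invoice.pdf', 'rb') as f:               # hypothetical binary input
    result = client.execute_agent_with_binary_data(
        deployment_token='YOUR_DEPLOYMENT_TOKEN',  # placeholder
        deployment_id='YOUR_DEPLOYMENT_ID',        # placeholder
        keyword_arguments={'task': 'extract_totals'},  # hypothetical agent parameter
        blobs={'invoice_file': f},                 # hypothetical blob name mapped to binary data
    )
# result is an AgentDataExecutionResult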

start_autonomous_agent(deployment_token, deployment_id, deployment_conversation_id=None, arguments=None, keyword_arguments=None, save_conversations=True)

Starts a deployed Autonomous agent associated with the given deployment_conversation_id, using the arguments and keyword arguments as inputs for the execute function of the trigger node.

Parameters:
  • deployment_token (str) – The deployment token used to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, making it safe to embed this model in an application or website.

  • deployment_id (str) – A unique string identifier for the deployment created under the project.

  • deployment_conversation_id (str) – A unique string identifier for the deployment conversation used for the conversation.

  • arguments (list) – Positional arguments to the agent execute function.

  • keyword_arguments (dict) – A dictionary where each ‘key’ represents the parameter name and its corresponding ‘value’ represents the value of that parameter for the agent execute function.

  • save_conversations (bool) – If true, a new conversation will be created for every run of the workflow associated with the agent.

Return type:

Dict
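
A sketch of starting an autonomous agent run; the keyword argument for the trigger node is hypothetical and depends on the agent's workflow:

from abacusai.prediction_client import PredictionClient

client = PredictionClient()
run = client.start_autonomous_agent(
    deployment_token='YOUR_DEPLOYMENT_TOKEN',   # placeholder
    deployment_id='YOUR_DEPLOYMENT_ID',         # placeholder
    keyword_arguments={'watch_folder': 's3://bucket/incoming'},  # hypothetical trigger-node input
    save_conversations=True,                    # create a new conversation per workflow run
)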

pause_autonomous_agent(deployment_token, deployment_id, deployment_conversation_id)

Pauses a deployed Autonomous agent associated with the given deployment_conversation_id.

Parameters:
  • deployment_token (str) – The deployment token used to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, making it safe to embed this model in an application or website.

  • deployment_id (str) – A unique string identifier for the deployment created under the project.

  • deployment_conversation_id (str) – A unique string identifier for the deployment conversation used for the conversation.

Return type:

Dict
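
And a matching sketch for pausing the agent, using placeholder identifiers throughout:

from abacusai.prediction_client import PredictionClient

client = PredictionClient()
client.pause_autonomous_agent(
    deployment_token='YOUR_DEPLOYMENT_TOKEN',          # placeholder
    deployment_id='YOUR_DEPLOYMENT_ID',                # placeholder
    deployment_conversation_id='CONVERSATION_ID_123',  # placeholder conversation identifier
)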