abacusai.prediction_client
==========================

.. py:module:: abacusai.prediction_client


Classes
-------

.. autoapisummary::

   abacusai.prediction_client.PredictionClient


Module Contents
---------------

.. py:class:: PredictionClient(client_options = None)

   Bases: :py:obj:`abacusai.client.BaseApiClient`

   Abacus.AI Prediction API Client. Does not utilize authentication and only contains public prediction methods.

   :param client_options: Optional API client configurations
   :type client_options: ClientOptions


   .. py:method:: predict_raw(deployment_token, deployment_id, **kwargs)

      Raw interface for returning predictions from Plug and Play deployments.

      :param deployment_token: The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.
      :type deployment_token: str
      :param deployment_id: The unique identifier to a deployment created under the project.
      :type deployment_id: str
      :param \*\*kwargs: Arbitrary key/value pairs that are sent as part of the request body.
      :type \*\*kwargs: dict


   .. py:method:: lookup_features(deployment_token, deployment_id, query_data, limit_results = None, result_columns = None)

      Returns the feature group deployed in the feature store project.

      :param deployment_token: A deployment token used to authenticate access to created deployments. This token only authorizes predictions on deployments in this project, so it can be safely embedded inside an application or website.
      :type deployment_token: str
      :param deployment_id: A unique identifier for a deployment created under the project.
      :type deployment_id: str
      :param query_data: A dictionary where the key is the column name (e.g. a column with name 'user_id' in your dataset) mapped to the column mapping USER_ID that uniquely identifies the entity against which a prediction is performed, and the value is the unique value of the same entity.
      :type query_data: dict
      :param limit_results: If provided, will limit the number of results to the value specified.
      :type limit_results: int
      :param result_columns: If provided, will limit the columns present in each result to the columns specified in this list.
      :type result_columns: list


   .. py:method:: predict(deployment_token, deployment_id, query_data, **kwargs)

      Returns a prediction for Predictive Modeling.

      :param deployment_token: A deployment token used to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, and is safe to embed in an application or website.
      :type deployment_token: str
      :param deployment_id: A unique identifier for a deployment created under the project.
      :type deployment_id: str
      :param query_data: A dictionary where the key is the column name (e.g. a column with name 'user_id' in the dataset) mapped to the column mapping USER_ID that uniquely identifies the entity against which a prediction is performed, and the value is the unique value of the same entity.
      :type query_data: dict


   .. py:method:: predict_multiple(deployment_token, deployment_id, query_data)

      Returns a list of predictions for predictive modeling.

      :param deployment_token: The deployment token used to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, and is safe to embed in an application or website.
      :type deployment_token: str
      :param deployment_id: The unique identifier for a deployment created under the project.
      :type deployment_id: str
      :param query_data: A list of dictionaries, where the 'key' is the column name (e.g. a column with name 'user_id' in the dataset) mapped to the column mapping USER_ID that uniquely identifies the entity against which a prediction is performed, and the 'value' is the unique value of the same entity.
      :type query_data: list
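As a minimal sketch, the snippet below shows the payload shapes ``predict`` and ``predict_multiple`` expect. The deployment token, deployment ID, and column values are hypothetical placeholders; the actual network calls are shown only in comments.

```python
# Hypothetical deployment credentials -- replace with values from your project.
DEPLOYMENT_TOKEN = "your_deployment_token"  # placeholder
DEPLOYMENT_ID = "your_deployment_id"        # placeholder

# predict() takes a single dict: {column mapped to USER_ID: entity value}.
single_query = {"user_id": "user_001"}

# predict_multiple() takes a list of such dicts, one per entity.
batch_query = [{"user_id": "user_001"}, {"user_id": "user_002"}]

# With the real client (requires the abacusai package and a live deployment):
# from abacusai import PredictionClient
# client = PredictionClient()
# prediction = client.predict(DEPLOYMENT_TOKEN, DEPLOYMENT_ID, single_query)
# predictions = client.predict_multiple(DEPLOYMENT_TOKEN, DEPLOYMENT_ID, batch_query)
```

Note that the dictionary keys must be the column names as they appear in your dataset, not the column-mapping names (USER_ID, etc.) used by the system.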
   .. py:method:: predict_from_datasets(deployment_token, deployment_id, query_data)

      Returns a list of predictions for Predictive Modeling.

      :param deployment_token: The deployment token used to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.
      :type deployment_token: str
      :param deployment_id: The unique identifier for a deployment created under the project.
      :type deployment_id: str
      :param query_data: A dictionary where the 'key' is the source dataset name, and the 'value' is a list of records corresponding to the dataset rows.
      :type query_data: dict


   .. py:method:: predict_lead(deployment_token, deployment_id, query_data, explain_predictions = False, explainer_type = None)

      Returns the probability of a user being a lead based on their interaction with the service/product and their own attributes (e.g. income, assets, credit score, etc.). Note that the inputs to this method, wherever applicable, should be the column names in the dataset mapped to the column mappings in our system (e.g. column 'user_id' mapped to mapping 'LEAD_ID' in our system).

      :param deployment_token: The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.
      :type deployment_token: str
      :param deployment_id: The unique identifier to a deployment created under the project.
      :type deployment_id: str
      :param query_data: A dictionary containing user attributes and/or user's interaction data with the product/service (e.g. number of clicks, items in cart, etc.).
      :type query_data: dict
      :param explain_predictions: Will explain predictions for leads.
      :type explain_predictions: bool
      :param explainer_type: Type of explainer to use for explanations.
      :type explainer_type: str
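As a hedged illustration of a ``predict_lead`` call, the query below mixes a user identifier with attribute and interaction features. Every column name and value is a hypothetical placeholder that must match your own dataset's mapped columns.

```python
# Hypothetical lead-scoring payload; keys are dataset column names that have
# been mapped in the Abacus.AI project (e.g. 'user_id' mapped to LEAD_ID).
query_data = {
    "user_id": "user_001",   # placeholder identifier column
    "income": 85000,         # placeholder user attribute
    "num_clicks": 12,        # placeholder interaction feature
}

# With the real client (not executed here):
# client.predict_lead(deployment_token, deployment_id, query_data,
#                     explain_predictions=True)
```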
   .. py:method:: predict_churn(deployment_token, deployment_id, query_data, explain_predictions = False, explainer_type = None)

      Returns the probability that a user will churn in response to their interactions with the item/product/service. Note that the inputs to this method, wherever applicable, will be the column names in your dataset mapped to the column mappings in our system (e.g. column 'churn_result' mapped to mapping 'CHURNED_YN' in our system).

      :param deployment_token: The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside an application or website.
      :type deployment_token: str
      :param deployment_id: The unique identifier to a deployment created under the project.
      :type deployment_id: str
      :param query_data: A dictionary where the 'key' is the column name (e.g. a column with name 'user_id' in your dataset) mapped to the column mapping USER_ID that uniquely identifies the entity against which a prediction is performed, and the 'value' is the unique value of the same entity.
      :type query_data: dict
      :param explain_predictions: Will explain predictions for churn.
      :type explain_predictions: bool
      :param explainer_type: Type of explainer to use for explanations.
      :type explainer_type: str


   .. py:method:: predict_takeover(deployment_token, deployment_id, query_data)

      Returns a probability for each class label associated with the types of fraud, or a 'yes'/'no' label for the possibility of fraud. Note that the inputs to this method, wherever applicable, will be the column names in the dataset mapped to the column mappings in our system (e.g., column 'account_name' mapped to mapping 'ACCOUNT_ID' in our system).

      :param deployment_token: The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside an application or website.
      :type deployment_token: str
      :param deployment_id: The unique identifier to a deployment created under the project.
      :type deployment_id: str
      :param query_data: A dictionary containing account activity characteristics (e.g., login id, login duration, login type, IP address, etc.).
      :type query_data: dict


   .. py:method:: predict_fraud(deployment_token, deployment_id, query_data)

      Returns the probability of a transaction performed under a specific account being fraudulent or not. Note that the inputs to this method, wherever applicable, should be the column names in your dataset mapped to the column mappings in our system (e.g. column 'account_number' mapped to the mapping 'ACCOUNT_ID' in our system).

      :param deployment_token: A deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.
      :type deployment_token: str
      :param deployment_id: A unique identifier to a deployment created under the project.
      :type deployment_id: str
      :param query_data: A dictionary containing transaction attributes (e.g. credit card type, transaction location, transaction amount, etc.).
      :type query_data: dict


   .. py:method:: predict_class(deployment_token, deployment_id, query_data, threshold = None, threshold_class = None, thresholds = None, explain_predictions = False, fixed_features = None, nested = None, explainer_type = None)

      Returns a classification prediction.

      :param deployment_token: The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model within an application or website.
      :type deployment_token: str
      :param deployment_id: The unique identifier for a deployment created under the project.
      :type deployment_id: str
      :param query_data: A dictionary where the 'key' is the column name (e.g. a column with the name 'user_id' in your dataset) mapped to the column mapping USER_ID that uniquely identifies the entity against which a prediction is performed, and the 'value' is the unique value of the same entity.
      :type query_data: dict
      :param threshold: A float value that is applied on the popular class label.
      :type threshold: float
      :param threshold_class: The label upon which the threshold is applied (binary labels only).
      :type threshold_class: str
      :param thresholds: Maps labels to thresholds (multi-label classification only). Defaults to the F1-optimal threshold if computed for the given class, else uses 0.5.
      :type thresholds: dict
      :param explain_predictions: If True, returns the SHAP explanations for all input features.
      :type explain_predictions: bool
      :param fixed_features: A set of input features to treat as constant for explanations - only honored when the explainer type is KERNEL_EXPLAINER.
      :type fixed_features: list
      :param nested: If specified, generates a prediction delta for each index of the specified nested feature.
      :type nested: str
      :param explainer_type: The type of explainer to use.
      :type explainer_type: str


   .. py:method:: predict_target(deployment_token, deployment_id, query_data, explain_predictions = False, fixed_features = None, nested = None, explainer_type = None)

      Returns a prediction from a classification or regression model. Optionally, includes explanations.

      :param deployment_token: The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.
      :type deployment_token: str
      :param deployment_id: The unique identifier of a deployment created under the project.
      :type deployment_id: str
      :param query_data: A dictionary where the 'key' is the column name (e.g. a column with name 'user_id' in your dataset) mapped to the column mapping USER_ID that uniquely identifies the entity against which a prediction is performed, and the 'value' is the unique value of the same entity.
      :type query_data: dict
      :param explain_predictions: If True, returns the SHAP explanations for all input features.
      :type explain_predictions: bool
      :param fixed_features: Set of input features to treat as constant for explanations - only honored when the explainer type is KERNEL_EXPLAINER.
      :type fixed_features: list
      :param nested: If specified, generates a prediction delta for each index of the specified nested feature.
      :type nested: str
      :param explainer_type: The type of explainer to use.
      :type explainer_type: str


   .. py:method:: get_anomalies(deployment_token, deployment_id, threshold = None, histogram = False)

      Returns a list of anomalies from the training dataset.

      :param deployment_token: The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.
      :type deployment_token: str
      :param deployment_id: The unique identifier to a deployment created under the project.
      :type deployment_id: str
      :param threshold: The threshold score of what is an anomaly. Valid values are between 0.8 and 0.99.
      :type threshold: float
      :param histogram: If True, will return a histogram of the distribution of all points.
      :type histogram: bool


   .. py:method:: get_timeseries_anomalies(deployment_token, deployment_id, start_timestamp = None, end_timestamp = None, query_data = None, get_all_item_data = False, series_ids = None)

      Returns a list of anomalous timestamps from the training dataset.

      :param deployment_token: The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.
      :type deployment_token: str
      :param deployment_id: The unique identifier to a deployment created under the project.
      :type deployment_id: str
      :param start_timestamp: Timestamp from which anomalies are to be detected in the training data.
      :type start_timestamp: str
      :param end_timestamp: Timestamp up to which anomalies are to be detected in the training data.
      :type end_timestamp: str
      :param query_data: Additional data on which anomaly detection is to be performed; it can be a single record, a list of records, or a JSON string representing a list of records.
      :type query_data: dict
      :param get_all_item_data: Set this to True if anomaly detection is to be performed on all the data related to the input IDs.
      :type get_all_item_data: bool
      :param series_ids: List of series IDs on which the anomaly detection is to be performed.
      :type series_ids: list


   .. py:method:: is_anomaly(deployment_token, deployment_id, query_data = None)

      Returns a list of anomaly attributes based on login information for a specified account. Note that the inputs to this method, wherever applicable, should be the column names in the dataset mapped to the column mappings in our system (e.g. column 'account_name' mapped to mapping 'ACCOUNT_ID' in our system).

      :param deployment_token: The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.
      :type deployment_token: str
      :param deployment_id: The unique identifier to a deployment created under the project.
      :type deployment_id: str
      :param query_data: The input data for the prediction.
      :type query_data: dict


   .. py:method:: get_event_anomaly_score(deployment_token, deployment_id, query_data = None)

      Returns an anomaly score for an event.

      :param deployment_token: The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.
      :type deployment_token: str
      :param deployment_id: The unique identifier to a deployment created under the project.
      :type deployment_id: str
      :param query_data: The input data for the prediction.
      :type query_data: dict


   .. py:method:: get_forecast(deployment_token, deployment_id, query_data, future_data = None, num_predictions = None, prediction_start = None, explain_predictions = False, explainer_type = None, get_item_data = False)

      Returns a list of forecasts for a given entity under the specified project deployment. Note that the inputs to the deployed model will be the column names in your dataset mapped to the column mappings in our system (e.g. column 'holiday_yn' mapped to mapping 'FUTURE' in our system).

      :param deployment_token: The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.
      :type deployment_token: str
      :param deployment_id: The unique identifier to a deployment created under the project.
      :type deployment_id: str
      :param query_data: A dictionary where the 'key' is the column name (e.g. a column with name 'store_id' in your dataset) mapped to the column mapping ITEM_ID that uniquely identifies the entity against which forecasting is performed, and the 'value' is the unique value of the same entity.
      :type query_data: dict
      :param future_data: A list of values known ahead of time that are relevant for forecasting (e.g. State Holidays, National Holidays, etc.). Each element is a dictionary, where both the key and the value are of type 'str'. For example, future data entered for a Store may be [{"Holiday":"No", "Promo":"Yes", "Date": "2015-07-31 00:00:00"}].
      :type future_data: list
      :param num_predictions: The number of timestamps to predict in the future.
      :type num_predictions: int
      :param prediction_start: The start date for predictions (e.g., "2015-08-01T00:00:00" as input for midnight of 2015-08-01).
      :type prediction_start: str
      :param explain_predictions: Will explain predictions for forecasting.
      :type explain_predictions: bool
      :param explainer_type: Type of explainer to use for explanations.
      :type explainer_type: str
      :param get_item_data: Will return the data corresponding to items in the query.
      :type get_item_data: bool


   .. py:method:: get_k_nearest(deployment_token, deployment_id, vector, k = None, distance = None, include_score = False, catalog_id = None)

      Returns the k nearest neighbors for the provided embedding vector.

      :param deployment_token: The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.
      :type deployment_token: str
      :param deployment_id: The unique identifier to a deployment created under the project.
      :type deployment_id: str
      :param vector: Input vector to perform the k nearest neighbors with.
      :type vector: list
      :param k: Overridable number of items to return.
      :type k: int
      :param distance: Specify the distance function to use. Options include "dot", "cosine", "euclidean", and "manhattan". Default = "dot".
      :type distance: str
      :param include_score: If True, will return the score alongside the resulting embedding value.
      :type include_score: bool
      :param catalog_id: An optional parameter honored only for embeddings that provide a catalog id.
      :type catalog_id: str


   .. py:method:: get_multiple_k_nearest(deployment_token, deployment_id, queries)

      Returns the k nearest neighbors for the queries provided.

      :param deployment_token: The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.
      :type deployment_token: str
      :param deployment_id: The unique identifier to a deployment created under the project.
      :type deployment_id: str
      :param queries: List of mappings of format {"catalogId": "cat0", "vectors": [...], "k": 20, "distance": "euclidean"}. See `getKNearest` for additional information about the supported parameters.
      :type queries: list


   .. py:method:: get_labels(deployment_token, deployment_id, query_data, return_extracted_entities = False)

      Returns a list of scored labels for a document.

      :param deployment_token: The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.
      :type deployment_token: str
      :param deployment_id: The unique identifier to a deployment created under the project.
      :type deployment_id: str
      :param query_data: Dictionary where the key is "Content" and the value is the text from which entities are to be extracted.
      :type query_data: dict
      :param return_extracted_entities: (Optional) If True, will return the extracted entities in a simpler format.
      :type return_extracted_entities: bool


   .. py:method:: get_entities_from_pdf(deployment_token, deployment_id, pdf = None, doc_id = None, return_extracted_features = False, verbose = False, save_extracted_features = None)

      Extracts text from the provided PDF and returns a list of recognized labels and their scores.

      :param deployment_token: The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.
      :type deployment_token: str
      :param deployment_id: The unique identifier to a deployment created under the project.
      :type deployment_id: str
      :param pdf: (Optional) The PDF to predict on. One of pdf or docId must be specified.
      :type pdf: io.TextIOBase
      :param doc_id: (Optional) The document ID of the PDF to predict on. One of pdf or docId must be specified.
      :type doc_id: str
      :param return_extracted_features: (Optional) If True, will return all extracted features (e.g. all tokens in a page) from the PDF. Default is False.
      :type return_extracted_features: bool
      :param verbose: (Optional) If True, will return all the extracted tokens' probabilities for all the trained labels. Default is False.
      :type verbose: bool
      :param save_extracted_features: (Optional) If True, will save extracted features (i.e. page tokens) so that they can be fetched using the prediction docId. Default is False.
      :type save_extracted_features: bool


   .. py:method:: get_recommendations(deployment_token, deployment_id, query_data, num_items = None, page = None, exclude_item_ids = None, score_field = None, scaling_factors = None, restrict_items = None, exclude_items = None, explore_fraction = None, diversity_attribute_name = None, diversity_max_results_per_value = None)

      Returns a list of recommendations for a given user under the specified project deployment. Note that the inputs to this method, wherever applicable, will be the column names in your dataset mapped to the column mappings in our system (e.g. column 'time' mapped to mapping 'TIMESTAMP' in our system).

      :param deployment_token: The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.
      :type deployment_token: str
      :param deployment_id: The unique identifier to a deployment created under the project.
      :type deployment_id: str
      :param query_data: A dictionary where the 'key' is the column name (e.g. a column with name 'user_name' in your dataset) mapped to the column mapping USER_ID that uniquely identifies the user for whom recommendations are made, and the 'value' is the unique value of the same user. For example, if you have the column name 'user_name' mapped to the column mapping 'USER_ID', then the query must have the exact same column name (user_name) as key and the name of the user (John Doe) as value.
      :type query_data: dict
      :param num_items: The number of items to recommend on one page. By default, it is set to 50 items per page.
      :type num_items: int
      :param page: The page number to be displayed. For example, if num_items is set to 10 and the total recommendation list has 50 items, then an input value of 2 for 'page' will display the items ranked from 11th to 20th.
      :type page: int
      :param score_field: The relative item scores are returned in a separate field named with the value passed for this argument (score_field).
      :type score_field: str
      :param scaling_factors: Allows you to bias the model towards certain items. The input is a list of dictionaries of the format {"column": "col0", "values": ["value0", "value1"], "factor": 1.1}. The key "column" takes the name of the column ("col0"); the key "values" takes the list of items (["value0", "value1"]) relative to which the model's recommendations should be biased; and the key "factor" takes the factor by which the item scores are adjusted. For example, if the input to scaling_factors is [{"column": "VehicleType", "values": ["SUV", "Sedan"], "factor": 1.4}], then after the model computes item probabilities, the probability of every SUV and Sedan in the list is multiplied by 1.4 before sorting. This is particularly useful if there's a type of item that might be less popular but you want to promote it, or there's an item that always comes up and you want to demote it.
      :type scaling_factors: list
      :param restrict_items: Allows you to restrict the recommendations to certain items. The input is a list of dictionaries of the format {"column": "col0", "values": ["value0", "value1", "value3", ...]}. The key "column" takes the name of the column ("col0"); the key "values" takes the list of items (["value0", "value1", "value3", ...]) to which the recommendations are restricted. For example, if the input to restrict_items is [{"column": "VehicleType", "values": ["SUV", "Sedan"]}], the recommendations are restricted to SUVs and Sedans. This type of restriction is particularly useful if there's a list of items that you know is of use in some particular scenario and you want to restrict the recommendations only to that list.
      :type restrict_items: list
      :param exclude_items: Allows you to exclude certain items from the list of recommendations. The input is a list of dictionaries of the format {"column": "col0", "values": ["value0", "value1", ...]}. The key "column" takes the name of the column ("col0"); the key "values" takes the list of items (["value0", "value1"]) to exclude from the recommendations. For example, if the input to exclude_items is [{"column": "VehicleType", "values": ["SUV", "Sedan"]}], the resulting recommendation list will exclude all SUVs and Sedans.
      :type exclude_items: list
      :param explore_fraction: Explore fraction.
      :type explore_fraction: float
      :param diversity_attribute_name: Item attribute column name which is used to ensure diversity of prediction results.
:type diversity_attribute_name: str :param diversity_max_results_per_value: maximum number of results per value of diversity_attribute_name. :type diversity_max_results_per_value: int .. py:method:: get_personalized_ranking(deployment_token, deployment_id, query_data, preserve_ranks = None, preserve_unknown_items = False, scaling_factors = None) Returns a list of items with personalized promotions for a given user under the specified project deployment. Note that the inputs to this method, wherever applicable, should be the column names in the dataset mapped to the column mappings in our system (e.g. column 'item_code' mapped to mapping 'ITEM_ID' in our system). :param deployment_token: The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model in an application or website. :type deployment_token: str :param deployment_id: The unique identifier to a deployment created under the project. :type deployment_id: str :param query_data: This should be a dictionary with two key-value pairs. The first pair represents a 'Key' where the column name (e.g. a column with name 'user_id' in the dataset) mapped to the column mapping USER_ID uniquely identifies the user against whom a prediction is made and a 'Value' which is the identifier value for that user. The second pair will have a 'Key' which will be the name of the column name (e.g. movie_name) mapped to ITEM_ID (unique item identifier) and a 'Value' which will be a list of identifiers that uniquely identifies those items. :type query_data: dict :param preserve_ranks: List of dictionaries of format {"column": "col0", "values": ["value0, value1"]}, where the ranks of items in query_data is preserved for all the items in "col0" with values, "value0" and "value1". 
This option is useful when the desired items are being recommended in the desired order and the ranks for those items need to be kept unchanged during recommendation generation. :type preserve_ranks: list :param preserve_unknown_items: If true, any items that are unknown to the model, will not be reranked, and the original position in the query will be preserved. :type preserve_unknown_items: bool :param scaling_factors: It allows you to bias the model towards certain items. The input to this argument is a list of dictionaries where the format of each dictionary is as follows: {"column": "col0", "values": ["value0", "value1"], "factor": 1.1}. The key, "column" takes the name of the column, "col0"; the key, "values" takes the list of items, "["value0", "value1"]" in reference to which the model recommendations need to be biased; and the key, "factor" takes the factor by which the item scores are adjusted. Let's take an example where the input to scaling_factors is [{"column": "VehicleType", "values": ["SUV", "Sedan"], "factor": 1.4}]. After we apply the model to get item probabilities, for every SUV and Sedan in the list, we will multiply the respective probability by 1.1 before sorting. This is particularly useful if there's a type of item that might be less popular but you want to promote it or there's an item that always comes up and you want to demote it. :type scaling_factors: list .. py:method:: get_ranked_items(deployment_token, deployment_id, query_data, preserve_ranks = None, preserve_unknown_items = False, score_field = None, scaling_factors = None, diversity_attribute_name = None, diversity_max_results_per_value = None) Returns a list of re-ranked items for a selected user when a list of items is required to be reranked according to the user's preferences. Note that the inputs to this method, wherever applicable, will be the column names in your dataset mapped to the column mappings in our system (e.g. 
column 'item_code' mapped to mapping 'ITEM_ID' in our system). :param deployment_token: The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website. :type deployment_token: str :param deployment_id: The unique identifier to a deployment created under the project. :type deployment_id: str :param query_data: This will be a dictionary with two key-value pairs. The first pair represents a 'Key' where the column name (e.g. a column with name 'user_id' in your dataset) mapped to the column mapping USER_ID uniquely identifies the user against whom a prediction is made and a 'Value' which is the identifier value for that user. The second pair will have a 'Key' which will be the name of the column name (e.g. movie_name) mapped to ITEM_ID (unique item identifier) and a 'Value' which will be a list of identifiers that uniquely identifies those items. :type query_data: dict :param preserve_ranks: List of dictionaries of format {"column": "col0", "values": ["value0, value1"]}, where the ranks of items in query_data is preserved for all the items in "col0" with values, "value0" and "value1". This option is useful when the desired items are being recommended in the desired order and the ranks for those items need to be kept unchanged during recommendation generation. :type preserve_ranks: list :param preserve_unknown_items: If true, any items that are unknown to the model, will not be reranked, and the original position in the query will be preserved :type preserve_unknown_items: bool :param score_field: The relative item scores are returned in a separate field named with the same name as the key (score_field) for this argument. :type score_field: str :param scaling_factors: It allows you to bias the model towards certain items. 
The input to this argument is a list of dictionaries, each of the form {"column": "col0", "values": ["value0", "value1"], "factor": 1.1}. The key "column" takes the name of the column, "col0"; the key "values" takes the list of items, ["value0", "value1"], in reference to which the model recommendations need to be biased; and the key "factor" takes the factor by which the item scores are adjusted. For example, if the input to scaling_factors is [{"column": "VehicleType", "values": ["SUV", "Sedan"], "factor": 1.4}], then after we apply the model to get item probabilities, the probability of every SUV and Sedan in the list is multiplied by 1.4 before sorting. This is particularly useful if there is a type of item that might be less popular but you want to promote it, or an item that always comes up and you want to demote it. :type scaling_factors: list :param diversity_attribute_name: The item attribute column name used to ensure diversity of prediction results. :type diversity_attribute_name: str :param diversity_max_results_per_value: The maximum number of results per value of diversity_attribute_name. :type diversity_max_results_per_value: int .. py:method:: get_related_items(deployment_token, deployment_id, query_data, num_items = None, page = None, scaling_factors = None, restrict_items = None, exclude_items = None) Returns a list of related items for a given item under the specified project deployment. Note that the inputs to this method, wherever applicable, will be the column names in your dataset mapped to the column mappings in our system (e.g. column 'item_code' mapped to mapping 'ITEM_ID' in our system). :param deployment_token: The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.
:type deployment_token: str :param deployment_id: The unique identifier to a deployment created under the project. :type deployment_id: str :param query_data: This will be a dictionary where the 'key' is the column name (e.g. a column with name 'user_name' in your dataset) mapped to the column mapping USER_ID that uniquely identifies the user against which related items are determined, and the 'value' is the unique value of the same entity. For example, if you have the column name 'user_name' mapped to the column mapping 'USER_ID', then the query must have the exact same column name (user_name) as key and the name of the user (John Doe) as value. :type query_data: dict :param num_items: The number of items to recommend on one page. By default, it is set to 50 items per page. :type num_items: int :param page: The page number to be displayed. For example, if num_items is set to 10 and the total recommendation list holds 50 recommended items, then an input value of 2 in the 'page' variable will display the items that rank from 11th to 20th. :type page: int :param scaling_factors: Allows you to bias the model towards certain items. The input to this argument is a list of dictionaries, each of the form {"column": "col0", "values": ["value0", "value1"], "factor": 1.1}. The key "column" takes the name of the column, "col0"; the key "values" takes the list of items, ["value0", "value1"], in reference to which the model recommendations need to be biased; and the key "factor" takes the factor by which the item scores are adjusted. For example, if the input to scaling_factors is [{"column": "VehicleType", "values": ["SUV", "Sedan"], "factor": 1.4}], then after we apply the model to get item probabilities, the probability of every SUV and Sedan in the list is multiplied by 1.4 before sorting.
This is particularly useful if there is a type of item that might be less popular but you want to promote it, or an item that always comes up and you want to demote it. :type scaling_factors: list :param restrict_items: Allows you to restrict the recommendations to certain items. The input to this argument is a list of dictionaries, each of the form {"column": "col0", "values": ["value0", "value1", "value3", ...]}. The key "column" takes the name of the column, "col0", and the key "values" takes the list of items, ["value0", "value1", "value3", ...], to which the recommendations are restricted. For example, if the input to restrict_items is [{"column": "VehicleType", "values": ["SUV", "Sedan"]}], the recommendations will be restricted to SUVs and Sedans. This type of restriction is particularly useful if there is a list of items that you know is of use in some particular scenario and you want to restrict the recommendations only to that list. :type restrict_items: list :param exclude_items: Allows you to exclude certain items from the list of recommendations. The input to this argument is a list of dictionaries, each of the form {"column": "col0", "values": ["value0", "value1", ...]}. The key "column" takes the name of the column, "col0", and the key "values" takes the list of items, ["value0", "value1"], to exclude from the recommendations. For example, if the input to exclude_items is [{"column": "VehicleType", "values": ["SUV", "Sedan"]}], the resulting recommendation list will exclude all SUVs and Sedans. This is particularly useful if there is a list of items that you know is of no use in some particular scenario and you don't want to show items from that list. :type exclude_items: list ..
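To make the scaling_factors semantics concrete, here is a local sketch of the behavior described above (multiply the score of matching items by the factor, then sort). The function and the sample items are illustrative only, not the service implementation:

```python
def apply_scaling_factors(items, scaling_factors):
    """Multiply each item's score by the factor of every matching rule, then
    sort by score. `scaling_factors` follows the documented format, e.g.
    [{"column": "VehicleType", "values": ["SUV", "Sedan"], "factor": 1.4}].
    """
    rescored = []
    for item in items:
        score = item["score"]
        for rule in scaling_factors:
            if item.get(rule["column"]) in rule["values"]:
                score *= rule["factor"]
        rescored.append({**item, "score": score})
    return sorted(rescored, key=lambda it: it["score"], reverse=True)

# Hypothetical scored items, as if returned by the model before rescoring.
items = [
    {"item_id": "a", "VehicleType": "SUV", "score": 0.5},
    {"item_id": "b", "VehicleType": "Truck", "score": 0.6},
]
ranked = apply_scaling_factors(
    items, [{"column": "VehicleType", "values": ["SUV", "Sedan"], "factor": 1.4}]
)
# The SUV's 0.5 is boosted to about 0.7, so it now outranks the Truck's 0.6.
```

restrict_items and exclude_items work analogously as filters applied to the candidate list rather than multipliers on the scores.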
py:method:: get_chat_response(deployment_token, deployment_id, messages, llm_name = None, num_completion_tokens = None, system_message = None, temperature = 0.0, filter_key_values = None, search_score_cutoff = None, chat_config = None) Return a chat response which continues the conversation based on the input messages and search results. :param deployment_token: The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website. :type deployment_token: str :param deployment_id: The unique identifier to a deployment created under the project. :type deployment_id: str :param messages: A list of chronologically ordered messages, starting with a user message and alternating sources. A message is a dict with attributes: is_user (bool): Whether the message is from the user. text (str): The message's text. :type messages: list :param llm_name: Name of the specific LLM backend to use to power the chat experience :type llm_name: str :param num_completion_tokens: Default for maximum number of tokens for chat answers :type num_completion_tokens: int :param system_message: The generative LLM system message :type system_message: str :param temperature: The generative LLM temperature :type temperature: float :param filter_key_values: A dictionary mapping column names to a list of values to restrict the retrieved search results. :type filter_key_values: dict :param search_score_cutoff: Cutoff for the document retriever score. Matching search results below this score will be ignored. :type search_score_cutoff: float :param chat_config: A dictionary specifying the query chat config override. :type chat_config: dict .. 
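The messages payload for the chat methods follows a simple alternating shape. A minimal sketch of building and checking it, with illustrative helper names and message texts (neither helper is part of the client):

```python
def build_messages(*texts):
    """Build a chronologically ordered messages list, starting with a user
    message and alternating between user and non-user turns."""
    return [{"is_user": i % 2 == 0, "text": t} for i, t in enumerate(texts)]

def validate_messages(messages):
    """Check the documented invariants: the first message is from the user,
    and turns alternate."""
    if not messages or not messages[0]["is_user"]:
        return False
    return all(a["is_user"] != b["is_user"] for a, b in zip(messages, messages[1:]))

msgs = build_messages(
    "What is our refund policy?",
    "Refunds are available within 30 days.",
    "And for digital goods?",
)
```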
py:method:: get_chat_response_with_binary_data(deployment_token, deployment_id, messages, llm_name = None, num_completion_tokens = None, system_message = None, temperature = 0.0, filter_key_values = None, search_score_cutoff = None, chat_config = None, attachments = None) Return a chat response which continues the conversation based on the input messages and search results. :param deployment_token: The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website. :type deployment_token: str :param deployment_id: The unique identifier to a deployment created under the project. :type deployment_id: str :param messages: A list of chronologically ordered messages, starting with a user message and alternating sources. A message is a dict with attributes: is_user (bool): Whether the message is from the user. text (str): The message's text. :type messages: list :param llm_name: Name of the specific LLM backend to use to power the chat experience :type llm_name: str :param num_completion_tokens: Default for maximum number of tokens for chat answers :type num_completion_tokens: int :param system_message: The generative LLM system message :type system_message: str :param temperature: The generative LLM temperature :type temperature: float :param filter_key_values: A dictionary mapping column names to a list of values to restrict the retrieved search results. :type filter_key_values: dict :param search_score_cutoff: Cutoff for the document retriever score. Matching search results below this score will be ignored. :type search_score_cutoff: float :param chat_config: A dictionary specifying the query chat config override. :type chat_config: dict :param attachments: A dictionary of binary data to use to answer the queries. :type attachments: dict ..
py:method:: get_conversation_response(deployment_id, message, deployment_token, deployment_conversation_id = None, external_session_id = None, llm_name = None, num_completion_tokens = None, system_message = None, temperature = 0.0, filter_key_values = None, search_score_cutoff = None, chat_config = None, doc_infos = None) Return a conversation response which continues the conversation based on the input message and deployment conversation id (if it exists). :param deployment_id: The unique identifier to a deployment created under the project. :type deployment_id: str :param message: A message from the user. :type message: str :param deployment_token: A token used to authenticate access to deployments created in this project. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website. :type deployment_token: str :param deployment_conversation_id: The unique identifier of a deployment conversation to continue. If not specified, a new one will be created. :type deployment_conversation_id: str :param external_session_id: The user-supplied unique identifier of a deployment conversation to continue. If specified, we will use this instead of an internal deployment conversation id. :type external_session_id: str :param llm_name: Name of the specific LLM backend to use to power the chat experience :type llm_name: str :param num_completion_tokens: Default for maximum number of tokens for chat answers :type num_completion_tokens: int :param system_message: The generative LLM system message :type system_message: str :param temperature: The generative LLM temperature :type temperature: float :param filter_key_values: A dictionary mapping column names to a list of values to restrict the retrieved search results. :type filter_key_values: dict :param search_score_cutoff: Cutoff for the document retriever score. Matching search results below this score will be ignored.
:type search_score_cutoff: float :param chat_config: A dictionary specifying the query chat config override. :type chat_config: dict :param doc_infos: An optional list of documents used for the conversation. A keyword 'doc_id' is expected to be present in each document for retrieving contents from the docstore. :type doc_infos: list .. py:method:: get_conversation_response_with_binary_data(deployment_id, deployment_token, message, deployment_conversation_id = None, external_session_id = None, llm_name = None, num_completion_tokens = None, system_message = None, temperature = 0.0, filter_key_values = None, search_score_cutoff = None, chat_config = None, attachments = None) Return a conversation response which continues the conversation based on the input message and deployment conversation id (if it exists). :param deployment_id: The unique identifier to a deployment created under the project. :type deployment_id: str :param deployment_token: A token used to authenticate access to deployments created in this project. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website. :type deployment_token: str :param message: A message from the user. :type message: str :param deployment_conversation_id: The unique identifier of a deployment conversation to continue. If not specified, a new one will be created. :type deployment_conversation_id: str :param external_session_id: The user-supplied unique identifier of a deployment conversation to continue. If specified, we will use this instead of an internal deployment conversation id.
:type external_session_id: str :param llm_name: Name of the specific LLM backend to use to power the chat experience :type llm_name: str :param num_completion_tokens: Default for maximum number of tokens for chat answers :type num_completion_tokens: int :param system_message: The generative LLM system message :type system_message: str :param temperature: The generative LLM temperature :type temperature: float :param filter_key_values: A dictionary mapping column names to a list of values to restrict the retrieved search results. :type filter_key_values: dict :param search_score_cutoff: Cutoff for the document retriever score. Matching search results below this score will be ignored. :type search_score_cutoff: float :param chat_config: A dictionary specifying the query chat config override. :type chat_config: dict :param attachments: A dictionary of binary data to use to answer the queries. :type attachments: dict .. py:method:: get_search_results(deployment_token, deployment_id, query_data, num = 15) Return the most relevant search results to the search query from the uploaded documents. :param deployment_token: A token used to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it can be securely embedded in an application or website. :type deployment_token: str :param deployment_id: A unique identifier of a deployment created under the project. :type deployment_id: str :param query_data: A dictionary where the key is "Content" and the value is the search query text. :type query_data: dict :param num: Number of search results to return. :type num: int .. py:method:: get_sentiment(deployment_token, deployment_id, document) Predicts sentiment on a document. :param deployment_token: A token used to authenticate access to deployments created in this project.
This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website. :type deployment_token: str :param deployment_id: A unique string identifier for a deployment created under this project. :type deployment_id: str :param document: The document to be analyzed for sentiment. :type document: str .. py:method:: get_entailment(deployment_token, deployment_id, document) Predicts the classification of the document :param deployment_token: The deployment token used to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website. :type deployment_token: str :param deployment_id: A unique string identifier for the deployment created under the project. :type deployment_id: str :param document: The document to be classified. :type document: str .. py:method:: get_classification(deployment_token, deployment_id, document) Predicts the classification of the document :param deployment_token: The deployment token used to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website. :type deployment_token: str :param deployment_id: A unique string identifier for the deployment created under the project. :type deployment_id: str :param document: The document to be classified. :type document: str .. py:method:: get_summary(deployment_token, deployment_id, query_data) Returns a JSON of the predicted summary for the given document. Note that the inputs to this method, wherever applicable, will be the column names in your dataset mapped to the column mappings in our system (e.g. column 'text' mapped to mapping 'DOCUMENT' in our system). :param deployment_token: The deployment token to authenticate access to created deployments. 
This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website. :type deployment_token: str :param deployment_id: The unique identifier to a deployment created under the project. :type deployment_id: str :param query_data: Raw data dictionary containing the required document data - must have a key 'document' corresponding to a DOCUMENT type text as value. :type query_data: dict .. py:method:: predict_language(deployment_token, deployment_id, query_data) Predicts the language of the text. :param deployment_token: The deployment token used to authenticate access to created deployments. This token is only authorized to predict on deployments within this project, making it safe to embed this model in an application or website. :type deployment_token: str :param deployment_id: A unique string identifier for a deployment created under the project. :type deployment_id: str :param query_data: The input string whose language is to be detected. :type query_data: str .. py:method:: get_assignments(deployment_token, deployment_id, query_data, forced_assignments = None, solve_time_limit_seconds = None, include_all_assignments = False) Get all positive assignments that match a query. :param deployment_token: The deployment token used to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it can be safely embedded in an application or website. :type deployment_token: str :param deployment_id: The unique identifier of a deployment created under the project. :type deployment_id: str :param query_data: Specifies the set of assignments being requested. The value for each key can be: 1. A simple scalar value, which is matched exactly 2. A list of values, which matches any element in the list 3.
A dictionary with keys lower_in/lower_ex and upper_in/upper_ex, which matches values in an inclusive/exclusive range :type query_data: dict :param forced_assignments: Set of assignments to force and resolve before returning query results. :type forced_assignments: dict :param solve_time_limit_seconds: Maximum time in seconds to spend solving the query. :type solve_time_limit_seconds: float :param include_all_assignments: If True, will return all assignments, including assignments with value 0. Default is False. :type include_all_assignments: bool .. py:method:: get_alternative_assignments(deployment_token, deployment_id, query_data, add_constraints = None, solve_time_limit_seconds = None, best_alternate_only = False) Get alternative positive assignments for a given query. Optimal assignments are ignored and the alternative assignments are returned instead. :param deployment_token: The deployment token used to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it can be safely embedded in an application or website. :type deployment_token: str :param deployment_id: The unique identifier of a deployment created under the project. :type deployment_id: str :param query_data: Specifies the set of assignments being requested. The value for each key can be: 1. A simple scalar value, which is matched exactly 2. A list of values, which matches any element in the list 3. A dictionary with keys lower_in/lower_ex and upper_in/upper_ex, which matches values in an inclusive/exclusive range :type query_data: dict :param add_constraints: A list of constraint dicts to apply to the query. Each constraint dict should have the following keys: 1. query (dict): Specifies the set of assignment variables involved in the constraint. The format is the same as query_data. 2. operator (str): Constraint operator '=' or '<=' or '>='. 3. constant (int): Constraint RHS constant value. 4.
coefficient_column (str): Column in the Assignment feature group to be used as the coefficient for the assignment variables; optional, defaults to 1 :type add_constraints: list :param solve_time_limit_seconds: Maximum time in seconds to spend solving the query. :type solve_time_limit_seconds: float :param best_alternate_only: When True, only the best alternate will be returned; when False, multiple alternates are returned :type best_alternate_only: bool .. py:method:: get_assignments_online_with_new_serialized_inputs(deployment_token, deployment_id, query_data = None, solve_time_limit_seconds = None) Get assignments for a given query, with new inputs :param deployment_token: The deployment token used to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it can be safely embedded in an application or website. :type deployment_token: str :param deployment_id: The unique identifier of a deployment created under the project. :type deployment_id: str :param query_data: A dictionary with assignment, constraint and constraint_equations_df :type query_data: dict :param solve_time_limit_seconds: Maximum time in seconds to spend solving the query. :type solve_time_limit_seconds: float .. py:method:: check_constraints(deployment_token, deployment_id, query_data) Check for any constraints violated by the overrides. :param deployment_token: The deployment token used to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model within an application or website. :type deployment_token: str :param deployment_id: The unique identifier for a deployment created under the project. :type deployment_id: str :param query_data: Assignment overrides to the solution. :type query_data: dict .. py:method:: predict_with_binary_data(deployment_token, deployment_id, blob) Make predictions for a given blob, e.g.
image, audio :param deployment_token: A token used to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model in an application or website. :type deployment_token: str :param deployment_id: A unique identifier to a deployment created under the project. :type deployment_id: str :param blob: The multipart/form-data of the data. :type blob: io.TextIOBase .. py:method:: describe_image(deployment_token, deployment_id, image, categories, top_n = None) Describe the similarity between an image and a list of categories. :param deployment_token: Authentication token to access created deployments. This token is only authorized to predict on deployments in the current project, and can be safely embedded in an application or website. :type deployment_token: str :param deployment_id: Unique identifier of a deployment created under the project. :type deployment_id: str :param image: Image to describe. :type image: io.TextIOBase :param categories: List of candidate categories to compare with the image. :type categories: list :param top_n: Return the N most similar categories. :type top_n: int .. py:method:: get_text_from_document(deployment_token, deployment_id, document = None, adjust_doc_orientation = False, save_predicted_pdf = False, save_extracted_features = False) Generate text from a document :param deployment_token: Authentication token to access created deployments. This token is only authorized to predict on deployments in the current project, and can be safely embedded in an application or website. :type deployment_token: str :param deployment_id: Unique identifier of a deployment created under the project. 
:type deployment_id: str :param document: Input document which can be an image, pdf, or word document (Some formats might not be supported yet) :type document: io.TextIOBase :param adjust_doc_orientation: (Optional) whether to detect the document page orientation and rotate it if needed. :type adjust_doc_orientation: bool :param save_predicted_pdf: (Optional) If True, will save the predicted pdf bytes so that they can be fetched using the prediction docId. Default is False. :type save_predicted_pdf: bool :param save_extracted_features: (Optional) If True, will save extracted features (i.e. page tokens) so that they can be fetched using the prediction docId. Default is False. :type save_extracted_features: bool .. py:method:: transcribe_audio(deployment_token, deployment_id, audio) Transcribe the audio :param deployment_token: The deployment token to authenticate access to created deployments. This token is only authorized to make predictions on deployments in this project, so it can be safely embedded in an application or website. :type deployment_token: str :param deployment_id: The unique identifier of a deployment created under the project. :type deployment_id: str :param audio: The audio to transcribe. :type audio: io.TextIOBase .. py:method:: classify_image(deployment_token, deployment_id, image = None, doc_id = None) Classify an image. :param deployment_token: A deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website. :type deployment_token: str :param deployment_id: A unique string identifier to a deployment created under the project. :type deployment_id: str :param image: The binary data of the image to classify. One of image or doc_id must be specified. :type image: io.TextIOBase :param doc_id: The document ID of the image. One of image or doc_id must be specified. :type doc_id: str .. 
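Where a method accepts exactly one of two inputs, such as classify_image's image/doc_id pair, the requirement can be checked before calling the client. This helper is hypothetical (not part of the client) and simply enforces the documented exclusive-or:

```python
def validate_classify_args(image=None, doc_id=None):
    """Enforce the rule documented for classify_image: exactly one of
    `image` or `doc_id` must be specified. Returns which one was given."""
    if (image is None) == (doc_id is None):
        raise ValueError("Specify exactly one of image or doc_id")
    return "image" if image is not None else "doc_id"

chosen = validate_classify_args(doc_id="doc_123")  # hypothetical document ID
```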
py:method:: classify_pdf(deployment_token, deployment_id, pdf = None) Returns a classification prediction from a PDF :param deployment_token: The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model within an application or website. :type deployment_token: str :param deployment_id: The unique identifier for a deployment created under the project. :type deployment_id: str :param pdf: (Optional) The pdf to predict on. One of pdf or docId must be specified. :type pdf: io.TextIOBase .. py:method:: get_cluster(deployment_token, deployment_id, query_data) Predicts the cluster for given data. :param deployment_token: The deployment token used to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website. :type deployment_token: str :param deployment_id: A unique string identifier for the deployment created under the project. :type deployment_id: str :param query_data: A dictionary where each 'key' represents a column name and its corresponding 'value' represents the value of that column. For Timeseries Clustering, the 'key' should be ITEM_ID, and its value should represent a unique item ID that needs clustering. :type query_data: dict .. py:method:: get_objects_from_image(deployment_token, deployment_id, image) Detect objects in an image. :param deployment_token: A deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website. :type deployment_token: str :param deployment_id: A unique string identifier to a deployment created under the project. :type deployment_id: str :param image: The binary data of the image to detect objects from. :type image: io.TextIOBase ..
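The binary-input methods above take a file-like object. A minimal sketch of preparing one (the bytes below are placeholders, not a real image; in practice you would pass the result of opening the actual file in binary mode, e.g. open('photo.jpg', 'rb')):

```python
import io

# Placeholder bytes standing in for real image data (PNG signature + padding).
image_bytes = b"\x89PNG\r\n\x1a\n" + b"\x00" * 16
image_file = io.BytesIO(image_bytes)  # file-like object, as these methods expect

# The client reads the stream when building the multipart/form-data request.
payload = image_file.read()
```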
py:method:: score_image(deployment_token, deployment_id, image) Score an image. :param deployment_token: A deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website. :type deployment_token: str :param deployment_id: A unique string identifier to a deployment created under the project. :type deployment_id: str :param image: The binary data of the image to score. :type image: io.TextIOBase .. py:method:: transfer_style(deployment_token, deployment_id, source_image, style_image) Change the source image to adopt the visual style from the style image. :param deployment_token: A token used to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model in an application or website. :type deployment_token: str :param deployment_id: A unique identifier to a deployment created under the project. :type deployment_id: str :param source_image: The source image to which the style will be applied. :type source_image: io.TextIOBase :param style_image: The image used as the style reference. :type style_image: io.TextIOBase .. py:method:: generate_image(deployment_token, deployment_id, query_data) Generate an image from a text prompt. :param deployment_token: The deployment token used to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model within an application or website. :type deployment_token: str :param deployment_id: A unique identifier to a deployment created under the project. :type deployment_id: str :param query_data: Specifies the text prompt. For example, {'prompt': 'a cat'} :type query_data: dict ..
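The generate_image query_data is a single-key prompt dictionary. A small sketch of building and sanity-checking it (build_image_query is illustrative, not a client helper):

```python
def build_image_query(prompt):
    """Build the query_data payload for generate_image, e.g. {'prompt': 'a cat'}."""
    if not isinstance(prompt, str) or not prompt.strip():
        raise ValueError("prompt must be a non-empty string")
    return {"prompt": prompt}

query_data = build_image_query("a cat")
```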
py:method:: execute_agent(deployment_token, deployment_id, arguments = None, keyword_arguments = None) Executes a deployed AI agent function using the arguments as keyword arguments to the agent execute function. :param deployment_token: The deployment token used to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website. :type deployment_token: str :param deployment_id: A unique string identifier for the deployment created under the project. :type deployment_id: str :param arguments: Positional arguments to the agent execute function. :type arguments: list :param keyword_arguments: A dictionary where each 'key' represents the parameter name and its corresponding 'value' represents the value of that parameter for the agent execute function. :type keyword_arguments: dict .. py:method:: get_matrix_agent_schema(deployment_token, deployment_id, query, doc_infos = None, deployment_conversation_id = None, external_session_id = None) Executes a deployed AI agent function using the arguments as keyword arguments to the agent execute function. :param deployment_token: The deployment token used to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website. :type deployment_token: str :param deployment_id: A unique string identifier for the deployment created under the project. :type deployment_id: str :param query: User input query to initialize the matrix computation. :type query: str :param doc_infos: An optional list of documents used for constructing the matrix. A keyword 'doc_id' is expected to be present in each document for retrieving contents from the docstore. :type doc_infos: list :param deployment_conversation_id: A unique string identifier for the deployment conversation used for the conversation.
:type deployment_conversation_id: str :param external_session_id: A unique string identifier for the session used for the conversation. If both deployment_conversation_id and external_session_id are not provided, a new session will be created. :type external_session_id: str .. py:method:: execute_conversation_agent(deployment_token, deployment_id, arguments = None, keyword_arguments = None, deployment_conversation_id = None, external_session_id = None, regenerate = False, doc_infos = None, agent_workflow_node_id = None) Executes a deployed AI agent function using the arguments as keyword arguments to the agent execute function. :param deployment_token: The deployment token used to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website. :type deployment_token: str :param deployment_id: A unique string identifier for the deployment created under the project. :type deployment_id: str :param arguments: Positional arguments to the agent execute function. :type arguments: list :param keyword_arguments: A dictionary where each 'key' represents the paramter name and its corresponding 'value' represents the value of that parameter for the agent execute function. :type keyword_arguments: dict :param deployment_conversation_id: A unique string identifier for the deployment conversation used for the conversation. :type deployment_conversation_id: str :param external_session_id: A unique string identifier for the session used for the conversation. If both deployment_conversation_id and external_session_id are not provided, a new session will be created. :type external_session_id: str :param regenerate: If True, will regenerate the response from the last query. :type regenerate: bool :param doc_infos: An optional list of documents use for the conversation. 
        A keyword 'doc_id' is expected to be present in each document for retrieving contents from the docstore.
      :type doc_infos: list
      :param agent_workflow_node_id: An optional agent workflow node ID used to trigger agent execution from an intermediate node.
      :type agent_workflow_node_id: str

   .. py:method:: lookup_matches(deployment_token, deployment_id, data = None, filters = None, num = None, result_columns = None, max_words = None, num_retrieval_margin_words = None, max_words_per_chunk = None, score_multiplier_column = None, min_score = None, required_phrases = None, filter_clause = None, crowding_limits = None, include_text_search = False)

      Looks up the document retriever deployed under the project with the given query and returns the matching documents. Original documents are split into chunks and stored in the document retriever; this lookup returns the relevant chunks. Where the provided settings permit, the returned chunks may be expanded to include more words from the original documents and merged if they overlap. The returned chunks are sorted by relevance.

      :param deployment_token: The deployment token used to authenticate access to created deployments. This token is only authorized to predict on deployments within this project, making it safe to embed this model in an application or website.
      :type deployment_token: str
      :param deployment_id: A unique string identifier for the deployment created under the project.
      :type deployment_id: str
      :param data: The query to search for.
      :type data: str
      :param filters: A dictionary mapping column names to a list of values used to restrict the retrieved search results.
      :type filters: dict
      :param num: If provided, limits the number of results to the value specified.
      :type num: int
      :param result_columns: If provided, limits the column properties present in each result to those specified in this list.
      :type result_columns: list
      :param max_words: If provided, limits the total number of words in the results to the value specified.
      :type max_words: int
      :param num_retrieval_margin_words: If provided, adds this number of words from the left and right of each returned chunk.
      :type num_retrieval_margin_words: int
      :param max_words_per_chunk: If provided, limits the number of words in each chunk to the value specified. If the value provided is smaller than the actual size of a chunk on disk, which is determined during document retriever creation, the actual chunk size is used. That is, chunks looked up from document retrievers will not be split into smaller chunks during lookup because of this setting.
      :type max_words_per_chunk: int
      :param score_multiplier_column: If provided, the values in this column are used to modify the relevance score of the returned chunks. Values in this column must be numeric.
      :type score_multiplier_column: str
      :param min_score: If provided, filters out results with a score less than the value specified.
      :type min_score: float
      :param required_phrases: If provided, each result will contain at least one of the phrases in the given list. Matching is whitespace- and case-insensitive.
      :type required_phrases: list
      :param filter_clause: If provided, the results of the query are filtered using this SQL WHERE clause.
      :type filter_clause: str
      :param crowding_limits: A dictionary mapping metadata columns to the maximum number of results per unique value of that column, used to ensure diversity of metadata attribute values in the results. Once a particular attribute value has reached its maximum count, further results with the same attribute value are excluded from the final result set. An entry in the map can also itself be a map specifying a limit per attribute value rather than a single limit for all values; if an attribute value is not present in such a map, its limit defaults to zero.
      :type crowding_limits: dict
      :param include_text_search: If true, combines the ranking of results from a BM25 text search over the documents with the vector search, using reciprocal rank fusion. This leverages both lexical and semantic matching for better overall results, and is particularly valuable in professional, technical, or specialized fields where both precision in terminology and understanding of context are important.
      :type include_text_search: bool

      :returns: The relevant documentation results found from the document retriever.
      :rtype: list[DocumentRetrieverLookupResult]

   .. py:method:: get_completion(deployment_token, deployment_id, prompt)

      Returns the completion of the prompt, generated by the finetuned LLM.

      :param deployment_token: The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.
      :type deployment_token: str
      :param deployment_id: The unique identifier to a deployment created under the project.
      :type deployment_id: str
      :param prompt: The prompt given to the finetuned LLM to generate the completion.
      :type prompt: str

   .. py:method:: execute_agent_with_binary_data(deployment_token, deployment_id, arguments = None, keyword_arguments = None, deployment_conversation_id = None, external_session_id = None, blobs = None)

      Executes a deployed AI agent function with binary data as inputs.

      :param deployment_token: The deployment token used to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.
      :type deployment_token: str
      :param deployment_id: A unique string identifier for the deployment created under the project.
      :type deployment_id: str
      :param arguments: Positional arguments to the agent execute function.
      :type arguments: list
      :param keyword_arguments: A dictionary where each 'key' represents the parameter name and its corresponding 'value' represents the value of that parameter for the agent execute function.
      :type keyword_arguments: dict
      :param deployment_conversation_id: A unique string identifier for the deployment conversation used for the conversation.
      :type deployment_conversation_id: str
      :param external_session_id: A unique string identifier for the session used for the conversation. If neither deployment_conversation_id nor external_session_id is provided, a new session will be created.
      :type external_session_id: str
      :param blobs: A dictionary of binary data to use as inputs to the agent execute function.
      :type blobs: dict

      :returns: The result of the agent execution
      :rtype: AgentDataExecutionResult

   .. py:method:: start_autonomous_agent(deployment_token, deployment_id, arguments = None, keyword_arguments = None, save_conversations = True)

      Starts a deployed Autonomous agent associated with the given deployment_conversation_id, using the arguments and keyword arguments as inputs to the execute function of the trigger node.

      :param deployment_token: The deployment token used to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, making it safe to embed this model in an application or website.
      :type deployment_token: str
      :param deployment_id: A unique string identifier for the deployment created under the project.
      :type deployment_id: str
      :param arguments: Positional arguments to the agent execute function.
      :type arguments: list
      :param keyword_arguments: A dictionary where each 'key' represents the parameter name and its corresponding 'value' represents the value of that parameter for the agent execute function.
      :type keyword_arguments: dict
      :param save_conversations: If True, a new conversation will be created for every run of the workflow associated with the agent.
      :type save_conversations: bool
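The agent execution methods above all accept the same ``arguments`` / ``keyword_arguments`` pair. A minimal sketch of normalizing these inputs before the call (the helper below is illustrative, not part of the abacusai package):

```python
# Hypothetical helper that normalizes the inputs passed to an agent's
# execute function; build_agent_inputs is NOT part of abacusai, it just
# shows the documented shapes (list for positional, dict for keyword).
def build_agent_inputs(arguments=None, keyword_arguments=None):
    """Default missing inputs to empty containers and copy the rest."""
    return {
        "arguments": list(arguments or []),
        "keyword_arguments": dict(keyword_arguments or {}),
    }

inputs = build_agent_inputs(["hello"], {"temperature": 0.2})

# The actual call would look roughly like this (requires a live deployment):
# from abacusai.prediction_client import PredictionClient
# client = PredictionClient()
# result = client.start_autonomous_agent(
#     "DEPLOYMENT_TOKEN", "DEPLOYMENT_ID",
#     arguments=inputs["arguments"],
#     keyword_arguments=inputs["keyword_arguments"],
#     save_conversations=True,
# )
```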
   .. py:method:: pause_autonomous_agent(deployment_token, deployment_id, deployment_conversation_id)

      Pauses a deployed Autonomous agent associated with the given deployment_conversation_id.

      :param deployment_token: The deployment token used to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, making it safe to embed this model in an application or website.
      :type deployment_token: str
      :param deployment_id: A unique string identifier for the deployment created under the project.
      :type deployment_id: str
      :param deployment_conversation_id: A unique string identifier for the deployment conversation used for the conversation.
      :type deployment_conversation_id: str
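The ``crowding_limits`` behaviour described for ``lookup_matches`` above can be sketched in plain Python. This is only an illustration of the documented semantics (per-column caps, with nested per-value maps defaulting absent values to zero), not the service's actual implementation:

```python
# Illustrative-only model of the documented crowding_limits semantics.
# enforce_crowding_limits is NOT part of abacusai; result rows are plain
# dicts standing in for retrieved chunks with metadata columns.
def enforce_crowding_limits(results, crowding_limits):
    """Keep each result only while every crowded metadata value is
    under its limit; a nested map gives a per-value limit, defaulting
    to zero for values absent from that map."""
    counts = {}  # (column, value) -> how many kept so far
    kept = []
    for row in results:
        allowed = True
        for col, limit in crowding_limits.items():
            val = row.get(col)
            # A nested map means per-value limits; missing values get 0.
            max_n = limit.get(val, 0) if isinstance(limit, dict) else limit
            if counts.get((col, val), 0) >= max_n:
                allowed = False
                break
        if allowed:
            for col in crowding_limits:
                key = (col, row.get(col))
                counts[key] = counts.get(key, 0) + 1
            kept.append(row)
    return kept

rows = [{"source": "a"}, {"source": "a"}, {"source": "b"}]
# A flat limit of 1 keeps one result per distinct "source" value.
diverse = enforce_crowding_limits(rows, {"source": 1})
```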