Generative AI-powered Agents for Automated Workflows
The relentless pressure to maintain operational efficiency is a challenge faced by companies of all sizes. This challenge is further amplified by the ever-increasing volumes of data, complex systems, and the sheer number of customer interactions that need to be managed. Traditional manual processes and the often-disjointed nature of information sources can lead to significant bottlenecks. These obstructions slow down decision-making and prevent teams from dedicating their time and energy to higher-value work that truly moves the needle. Generative AI agents represent a transformative solution. These agents can automatically interface with a company’s existing systems, execute a wide range of tasks, and provide instant insights. This allows organizations to scale their operations effectively without a corresponding increase in complexity.
Amazon Bedrock in SageMaker Unified Studio directly addresses these pervasive challenges. It offers a unified service designed for building AI-driven solutions. This platform centralizes customer data and enables natural language interactions, making it intuitive and user-friendly. A key advantage is its seamless integration with existing applications. It also incorporates essential Amazon Bedrock features, including a wide selection of foundation models (FMs), prompt engineering capabilities, knowledge bases for contextual understanding, agents for task execution, flows for workflow orchestration, evaluation tools for performance monitoring, and guardrails for responsible AI development. Users can conveniently access this comprehensive suite of AI capabilities through their organization’s single sign-on (SSO) system. This fosters collaboration among team members and allows for the refinement of AI applications without requiring direct access to the AWS Management Console.
Amazon Bedrock in SageMaker Unified Studio empowers you to create and deploy sophisticated generative AI agents. These agents can seamlessly integrate with your organization’s applications, databases, and even third-party systems. This level of integration enables natural language interactions across your entire technology stack. The chat agent acts as a crucial bridge, connecting complex information systems with user-friendly communication. By leveraging Amazon Bedrock functions and Amazon Bedrock Knowledge Bases, the agent gains the ability to connect with diverse data sources. These sources can range from JIRA APIs for real-time project status tracking to customer relationship management (CRM) systems for retrieving customer information. The agent can also update project tasks, manage user preferences, and much more.
This comprehensive functionality provides significant benefits to various teams within an organization. Sales and marketing teams can gain rapid access to customer information and their preferred meeting times. Project managers can efficiently manage JIRA tasks and timelines, optimizing project workflows. This streamlined process, facilitated by the AI agent, leads to enhanced productivity and improved customer interactions across the entire organization.
Solution Overview
Amazon Bedrock provides a governed, collaborative environment, all within SageMaker Unified Studio, to build and share generative AI applications. Let’s delve into a practical example solution that demonstrates the implementation of a customer management agent:
- Agentic Chat: A sophisticated agentic chat application can be built using Amazon Bedrock’s chat application features. This chat application can be seamlessly integrated with functions that are easily built using other AWS services, such as AWS Lambda for serverless compute and Amazon API Gateway for creating and managing APIs.
- Data Management: SageMaker Unified Studio, in conjunction with Amazon DataZone, offers a comprehensive data management solution through its integrated services. Organization administrators have fine-grained control over member access to Amazon Bedrock models and features. This ensures secure identity management and granular access control, maintaining data security and compliance.
Before we dive deep into the deployment of the AI agent, it’s beneficial to walk through the key steps of the architecture.
The workflow unfolds as follows:
- User Authentication and Interaction: The user initiates the process by logging into SageMaker Unified Studio using their organization’s SSO credentials from AWS IAM Identity Center. Once authenticated, the user interacts with the chat application using natural language, posing questions or making requests.
- Function Invocation: The Amazon Bedrock chat application intelligently utilizes a pre-defined function to retrieve relevant information. This function might be designed to fetch JIRA status updates or customer information from the database. The retrieval is performed through a secure endpoint using API Gateway.
- Secure Access and Lambda Trigger: The chat application authenticates itself with API Gateway to securely access the designated endpoint. This authentication is achieved using a randomly generated API key securely stored in AWS Secrets Manager. Based on the user’s request, the appropriate Lambda function is triggered.
- Action Execution: The invoked Lambda function performs the specific action requested by the user, calling the JIRA API or querying the database with the parameters provided by the agent (a minimal handler sketch follows this list). The agent is designed to handle a variety of tasks, including:
- Providing a concise overview of a specific customer.
- Listing recent interactions with a particular customer.
- Retrieving the meeting preferences for a designated customer.
- Retrieving a list of open JIRA tickets associated with a specific project.
- Updating the due date for a particular JIRA ticket.
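The actual Lambda implementation ships with the GitHub repository; purely as an illustration of the Action Execution step, the sketch below shows how a handler might route the agent’s function calls to either the JIRA REST API or a customer lookup. The event shape, action names, helper logic, and the environment variable and secret names are assumptions for this sketch, not the repository’s real code.

```python
import json
import os

import boto3
import requests  # assumed to be packaged with the function or provided via a Lambda layer

secrets = boto3.client("secretsmanager")


def _jira_auth():
    # Placeholder secret IDs: the stack in this post stores the JIRA credentials in Secrets Manager.
    api_key = secrets.get_secret_value(SecretId=os.environ["JIRA_API_KEY_ARN"])["SecretString"]
    user = secrets.get_secret_value(SecretId=os.environ["JIRA_USER_NAME"])["SecretString"]
    return (user, api_key)


def lambda_handler(event, context):
    # Assumed event shape: API Gateway proxies the agent's function call as
    # {"action": "...", "parameters": {...}} — adjust to match the real schema.
    body = json.loads(event.get("body") or "{}")
    action = body.get("action")
    params = body.get("parameters", {})

    if action == "get_open_jira_tasks":
        base_url = secrets.get_secret_value(SecretId=os.environ["JIRA_URL"])["SecretString"]
        resp = requests.get(
            f"{base_url}/rest/api/3/search",
            params={"jql": f"project = {params['project_id']} AND statusCategory != Done"},
            auth=_jira_auth(),
            timeout=10,
        )
        result = resp.json()
    elif action == "get_customer_overview":
        # A database lookup (for example, against DynamoDB) would go here.
        result = {"customer_id": params["customer_id"], "overview": "..."}
    else:
        return {"statusCode": 400, "body": json.dumps({"error": f"Unknown action: {action}"})}

    return {"statusCode": 200, "body": json.dumps(result)}
```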
Prerequisites
To follow along with this solution implementation and build your own customer management agent, you’ll need the following prerequisites:
- AWS Account: An active AWS account is essential to access the necessary services.
- SageMaker Unified Studio Access: User access to Amazon Bedrock within SageMaker Unified Studio is required.
- Model Access: You’ll need model access to Amazon Nova Pro on Amazon Bedrock. Ensure this model is available in a supported AWS Region.
- JIRA Setup: A JIRA application, its corresponding JIRA URL, and a JIRA API token associated with your account are necessary for integrating with JIRA.
It’s assumed that you have a basic understanding of fundamental serverless concepts on AWS, including API Gateway, Lambda functions, and IAM Identity Center. While this post won’t provide in-depth definitions of these services, we will demonstrate their use cases in the context of the new Amazon Bedrock features available within SageMaker Unified Studio.
Deploying the Solution
To deploy the customer management agent solution, follow these steps:
- Download Code: Begin by downloading the necessary code from the provided GitHub repository.
- Retrieve JIRA Credentials: Obtain the values for JIRA_API_KEY_ARN, JIRA_URL, and JIRA_USER_NAME for the Lambda function. These credentials will be used to authenticate with your JIRA instance.
- Launch CloudFormation Stack: Use the provided AWS CloudFormation template. Refer to the documentation on ‘Create a stack from the CloudFormation console’ for detailed instructions on launching the stack in your preferred AWS Region.
- API Gateway URL: After the CloudFormation stack has been successfully deployed, navigate to the Outputs tab. Locate and note down the
ApiInvokeURL
value. This URL represents the endpoint for your API Gateway. - Secrets Manager Configuration: Access the Secrets Manager console. Find the secrets corresponding to
JIRA_API_KEY_ARN
,JIRA_URL
, andJIRA_USER_NAME
. - Update Secret Values: Choose the Retrieve secret option for each secret. Copy the corresponding variables obtained in Step 2 into the secret plaintext string. This will securely store your JIRA credentials.
- Sign in to SageMaker Unified Studio: Sign in to SageMaker Unified Studio using your organization’s SSO credentials.
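As an optional scripted alternative to the last three steps above, the boto3 sketch below reads the ApiInvokeURL output from the stack and writes the JIRA values into Secrets Manager. The stack name, secret IDs, and JIRA values are placeholders; use the names actually created by the CloudFormation template.

```python
import boto3

cloudformation = boto3.client("cloudformation")
secretsmanager = boto3.client("secretsmanager")

# Placeholder stack name: use the name you gave the stack when launching the template.
stack = cloudformation.describe_stacks(StackName="crm-agent-stack")["Stacks"][0]
outputs = {o["OutputKey"]: o["OutputValue"] for o in stack["Outputs"]}
print("API Gateway endpoint:", outputs["ApiInvokeURL"])  # used later when creating the chat agent function

# Placeholder secret IDs and values: use the secrets created by the stack and your own JIRA details.
jira_values = {
    "JIRA_API_KEY_ARN": "<your JIRA API token>",
    "JIRA_URL": "https://your-domain.atlassian.net",
    "JIRA_USER_NAME": "you@example.com",
}
for secret_id, value in jira_values.items():
    secretsmanager.put_secret_value(SecretId=secret_id, SecretString=value)
```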
Creating a New Project
With the infrastructure in place, let’s create a new project within SageMaker Unified Studio:
- Project Creation: On the SageMaker Unified Studio landing page, initiate the creation of a new project.
- Project Naming: Assign a descriptive name to your project (e.g., crm-agent).
- Profile Selection: Choose the Generative AI application development profile and proceed.
- Default Settings: Accept the default settings and continue.
- Confirmation: Review the project configuration and choose Create project to confirm.
Building the Chat Agent Application
Now, let’s construct the core of our solution – the chat agent application:
- Chat Agent Initiation: Within the crm-agent project landing page, locate the New section on the right-hand side and choose Chat agent to begin building your application. This will present a list of configurations for your agent application.
- Model Selection: Under the model section, select a desired foundation model (FM) supported by Amazon Bedrock. For this crm-agent, we’ll choose Amazon Nova Pro.
- System Prompt Definition: In the system prompt section, provide the following prompt. This prompt will guide the agent’s behavior and responses. You can optionally include examples of user input and model responses to further refine its performance.
You are a customer relationship management agent tasked with helping a sales person plan their work with customers. You are provided with an API endpoint. This endpoint can provide information like company overview, company interaction history (meeting times and notes), company meeting preferences (meeting type, day of week, and time of day). You can also query Jira tasks and update their timeline. After receiving a response, clean it up into a readable format. If the output is a numbered list, format it as such with newline characters and numbers.
- Function Creation: In the Functions section, choose Create a new function. This function will define the actions the agent can perform.
- Function Naming: Give your function a descriptive name, such as crm_agent_calling.
- Function Schema: For the Function schema, use the OpenAPI definition provided in the GitHub repository. This schema defines the input and output parameters for your function; an illustrative fragment appears after these steps.
- Authentication Configuration: For Authentication method, choose API Keys (Max. 2 Keys) and enter the following details:
- For Key sent in, choose Header.
- For Key name, enter x-api-key.
- For Key value, enter the API key stored in Secrets Manager.
- API Server Endpoint: In the API servers section, input the endpoint URL you obtained from the CloudFormation Outputs (the ApiInvokeURL value).
- Function Finalization: Choose Create to finalize the function creation.
- Application Saving: In the Functions section of the chat agent application, select the function you just created and choose Save to complete the application creation.
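The authoritative function schema lives in the GitHub repository; the fragment below is only a hypothetical illustration of the shape such a schema takes, expressed as a Python dictionary so it can be serialized to JSON and pasted into the Function schema field. The path, operationId, and parameter names are invented for this example.

```python
import json

# Hypothetical fragment only: the real schema ships with the GitHub repository.
# It illustrates the general shape — OpenAPI 3.x, one operation per action the
# agent can take, each identified by an operationId.
function_schema = {
    "openapi": "3.0.0",
    "info": {"title": "CRM agent API", "version": "1.0.0"},
    "paths": {
        "/customer/{customerId}/overview": {
            "get": {
                "operationId": "get_customer_overview",
                "description": "Return a brief overview of the given customer.",
                "parameters": [
                    {
                        "name": "customerId",
                        "in": "path",
                        "required": True,
                        "schema": {"type": "string"},
                    }
                ],
                "responses": {
                    "200": {
                        "description": "Customer overview",
                        "content": {"application/json": {"schema": {"type": "object"}}},
                    }
                },
            }
        }
    },
}

print(json.dumps(function_schema, indent=2))  # JSON to paste into the Function schema field
```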
Example Interactions
Let’s explore some practical examples of how this chat agent can be used:
Use Case 1: CRM Analyst Retrieving Customer Details
A CRM analyst can use natural language to retrieve customer details stored in the database. Here are some example questions they might ask:
- ‘Give me a brief overview of customer C-jkl101112.’
- ‘List the last 2 recent interactions for customer C-def456.’
- ‘What communication method does customer C-mno131415 prefer?’
- ‘Recommend optimal time and contact channel to reach out to C-ghi789 based on their preferences and our last interaction.’
The agent, upon receiving these requests, will intelligently query the database and provide the corresponding answers in a clear and concise format.
Use Case 2: Project Manager Managing JIRA Tickets
A project manager can use the agent to list and update JIRA tickets. Here are some example interactions:
- ‘What are the open JIRA Tasks for project id CRM?’
- ‘Please update JIRA Task CRM-3 to 1 week out.’
The agent will access the JIRA board, fetch the relevant project information, and provide a list of open JIRA tasks. It will also update the timeline of a specific task as requested by the user.
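Behind the scenes, a due-date change like the one above typically maps to a single JIRA Cloud REST call. The sketch below shows that call in Python, independently of the repository’s Lambda code; the site URL, credentials, and issue key are placeholders.

```python
from datetime import date, timedelta

import requests

# Placeholders: use your JIRA site URL, account email, and API token.
JIRA_URL = "https://your-domain.atlassian.net"
AUTH = ("you@example.com", "<your JIRA API token>")

# Move the due date of CRM-3 one week out, mirroring the example request above.
new_due_date = (date.today() + timedelta(weeks=1)).isoformat()
resp = requests.put(
    f"{JIRA_URL}/rest/api/3/issue/CRM-3",
    json={"fields": {"duedate": new_due_date}},
    auth=AUTH,
    timeout=10,
)
resp.raise_for_status()  # JIRA returns 204 No Content when the update succeeds
```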
Clean Up
To prevent incurring unnecessary costs, perform the following clean-up steps:
- Delete CloudFormation Stack: Delete the CloudFormation stack that you deployed earlier.
- Delete Function Component: Remove the function component you created in Amazon Bedrock.
- Delete Chat Agent Application: Delete the chat agent application within Amazon Bedrock.
- Delete Domains: Delete the domains in SageMaker Unified Studio.
Cost
Using Amazon Bedrock within SageMaker Unified Studio doesn’t incur any separate charges. However, you will be billed for the individual AWS services and resources utilized within the service. Amazon Bedrock operates on a pay-as-you-go model, meaning you only pay for the resources you consume, with no minimum fees or upfront commitments.
If you require further assistance with pricing calculations or have questions about optimizing costs for your specific use case, it’s recommended to reach out to AWS Support or consult with your account manager. They can provide tailored guidance based on your needs.
Expanding Functionality with Knowledge Bases (Optional)
While the current solution focuses on direct API interactions, you can significantly enhance the agent’s capabilities by integrating Amazon Bedrock Knowledge Bases. Knowledge Bases allow you to connect your agent to various data sources, such as internal documentation, FAQs, and wikis. This enables the agent to answer a broader range of questions and provide more comprehensive information, even if the specific data isn’t directly accessible through an API.
To add a Knowledge Base:
- Create a Knowledge Base: Within the Amazon Bedrock console, create a new Knowledge Base.
- Configure Data Source: Connect the Knowledge Base to your desired data source (e.g., Amazon S3 bucket containing documents).
- Ingest Data: Ingest the data into the Knowledge Base. This process involves Amazon Bedrock analyzing and indexing the content.
- Associate with Agent: In your chat agent application within SageMaker Unified Studio, associate the newly created Knowledge Base.
Once integrated, the agent can leverage the Knowledge Base to answer questions based on the ingested data. For example, if you have a Knowledge Base containing product documentation, the agent could answer questions like:
- ‘What are the key features of product X?’
- ‘How do I troubleshoot error Y in product Z?’
- ‘What is the warranty period for product A?’
This expands the agent’s capabilities beyond simple data retrieval and allows it to provide more contextual and informative responses.
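Before wiring a knowledge base into the agent, it can be useful to sanity-check what it returns for a sample question. A minimal boto3 sketch using the Bedrock Retrieve API is shown below; the knowledge base ID is a placeholder for the one you created and synced above.

```python
import boto3

# Placeholder ID: use the ID of the knowledge base you created and ingested data into.
KNOWLEDGE_BASE_ID = "XXXXXXXXXX"

bedrock_agent_runtime = boto3.client("bedrock-agent-runtime")

response = bedrock_agent_runtime.retrieve(
    knowledgeBaseId=KNOWLEDGE_BASE_ID,
    retrievalQuery={"text": "What are the key features of product X?"},
    retrievalConfiguration={"vectorSearchConfiguration": {"numberOfResults": 3}},
)

for result in response["retrievalResults"]:
    # Each result contains the retrieved text chunk; source metadata is also available.
    print(result["content"]["text"][:200])
```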
Guardrails and Responsible AI (Optional)
Amazon Bedrock provides features for implementing guardrails, which are essential for responsible AI development. Guardrails help ensure that your agent behaves in a safe, ethical, and compliant manner. You can define rules and policies to control the agent’s responses, prevent it from generating inappropriate content, and ensure it adheres to your organization’s guidelines.
Consider implementing guardrails to:
- Filter Profanity: Prevent the agent from using or generating offensive language.
- Restrict Topics: Limit the agent’s responses to specific topics or domains.
- Control Tone: Ensure the agent maintains a professional and appropriate tone.
- Prevent Bias: Mitigate potential biases in the agent’s responses.
- Ensure Compliance: Adhere to relevant regulations and industry standards.
By implementing guardrails, you can build a more trustworthy and reliable AI agent that aligns with your organization’s values and ethical principles.
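Guardrails can be configured in the console or programmatically. As a rough sketch of the bullets above (profanity filtering, a denied topic, and high-strength content filters), a boto3 call might look like the following; the guardrail name, topic definition, and messages are examples, not prescriptions.

```python
import boto3

bedrock = boto3.client("bedrock")

guardrail = bedrock.create_guardrail(
    name="crm-agent-guardrail",  # example name
    description="Keeps the CRM chat agent professional and on-topic.",
    # Block profanity in both user input and model output.
    wordPolicyConfig={"managedWordListsConfig": [{"type": "PROFANITY"}]},
    # Filter harmful content categories at high strength.
    contentPolicyConfig={
        "filtersConfig": [
            {"type": "HATE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            {"type": "INSULTS", "inputStrength": "HIGH", "outputStrength": "HIGH"},
        ]
    },
    # Example denied topic: keep the agent away from legal advice.
    topicPolicyConfig={
        "topicsConfig": [
            {
                "name": "Legal advice",
                "definition": "Requests for legal opinions or contract interpretation.",
                "type": "DENY",
            }
        ]
    },
    blockedInputMessaging="Sorry, I can't help with that request.",
    blockedOutputsMessaging="Sorry, I can't provide that information.",
)
print("Guardrail ID:", guardrail["guardrailId"])
```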
Monitoring and Evaluation (Optional)
Amazon Bedrock offers tools for monitoring and evaluating the performance of your agent. You can track key metrics, such as response accuracy, latency, and user satisfaction. This allows you to identify areas for improvement and continuously refine your agent’s capabilities.
Consider using the evaluation tools to:
- Track User Feedback: Collect user feedback on the agent’s responses.
- Analyze Conversation Logs: Review conversation logs to identify patterns and areas for improvement.
- Measure Response Accuracy: Evaluate the accuracy of the agent’s responses against a ground truth dataset.
- Monitor Latency: Track the time it takes for the agent to respond to user requests.
Regular monitoring and evaluation are crucial for ensuring that your agent continues to meet your organization’s needs and provides a positive user experience. By continuously analyzing its performance, you can identify and address any issues, optimize its responses, and ensure it remains a valuable asset.
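For the latency point in particular, Amazon Bedrock publishes model-invocation metrics to Amazon CloudWatch. The sketch below pulls hourly average invocation latency for the model used in this post over the last day; treat the model ID as a placeholder and confirm the metric names available in your Region.

```python
from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch")

now = datetime.now(timezone.utc)
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/Bedrock",
    MetricName="InvocationLatency",
    Dimensions=[{"Name": "ModelId", "Value": "amazon.nova-pro-v1:0"}],  # placeholder model ID
    StartTime=now - timedelta(days=1),
    EndTime=now,
    Period=3600,  # one-hour buckets
    Statistics=["Average"],
)

for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    # InvocationLatency is reported in milliseconds.
    print(point["Timestamp"].isoformat(), f"{point['Average']:.0f} ms")
```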