Introduction
Working with Microsoft Copilot Studio often requires creating dynamic, interactive user experiences. One powerful feature that enables this is the integration of Adaptive Cards with dynamic JSON content generation. In this post, I'll walk you through a real-world implementation where we needed to generate radio buttons dynamically based on items returned from a RAG (Retrieval-Augmented Generation) system.
The Challenge
Our use case involved creating a conversational interface where users could select from a list of options that weren't static. The radio button choices needed to be generated dynamically based on the results returned from our RAG implementation. This meant we couldn't rely on pre-built, static Adaptive Cards.
The Solution: Dynamic Variable Integration
To solve this challenge, we leveraged Microsoft Copilot Studio's Dynamic Variable functionality combined with Adaptive Card Templates. Here's how we implemented it:
Step 1: Setting Up the Adaptive Card Template
In Copilot Studio, we configured an Adaptive Card Template using the `AdaptiveCardTemplate` object with placeholder expressions:
```yaml
- kind: SendActivity
  id: sendActivity_dinNxr
  displayName: Dynamic Radio Button
  activity:
    text:
      - "{Text(Topic.KBAresponse.answer)}"
    attachments:
      - kind: AdaptiveCardTemplate
        cardContent: =Text(Topic.Var1RadioButton)
```
The key here is the `cardContent: =Text(Topic.Var1RadioButton)` line, which references our dynamic variable that contains the JSON structure for our Adaptive Card.
Step 2: API Integration for Dynamic Content
The dynamic variable `Topic.Var1RadioButton` gets populated through an HTTP Node in Copilot Studio, which makes an API call to a Python-hosted FastAPI endpoint. This approach allows us to generate the Adaptive Card structure programmatically based on the current context and data.
Step 3: FastAPI Implementation
Here's the Python FastAPI implementation that generates our dynamic Adaptive Card content:
```python
from fastapi import APIRouter, Request
from fastapi.responses import PlainTextResponse
import json

router = APIRouter()
@router.post("/generate-adaptive-card")
async def generate_adaptive_card(request: Request):
    payload = await request.json()
    solution_ids_used = payload.get("SolutionIDsUsed", [])
    used_ids = payload.get("UsedIDs", [])
    # Build the Adaptive Card: an Input.ChoiceSet with style "expanded"
    # renders as radio buttons (this card layout is illustrative).
    adaptive_card = {
        "type": "AdaptiveCard",
        "$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
        "version": "1.4",
        "body": [{
            "type": "Input.ChoiceSet",
            "id": "selectedSolution",
            "style": "expanded",
            "choices": [{"title": f"Solution {sid}", "value": str(sid)}
                        for sid in solution_ids_used if sid not in used_ids],
        }],
        "actions": [{"type": "Action.Submit", "title": "Submit"}],
    }
    # Return escaped string (for plain text transport of JSON)
    escaped_json = json.dumps(adaptive_card)
    return PlainTextResponse(content=escaped_json)
```
Critical Implementation Details
Return Type Matters
One crucial aspect of this implementation is the return type from your API endpoint. The response **must be of type Plain Text**, not JSON. If you return a JSON response directly, Copilot Studio's workflow screen will throw rendering errors.
This is why we use:
```python
return PlainTextResponse(content=escaped_json)
```
Instead of returning the JSON object directly.
Dynamic Variable Population
The HTTP Node in Copilot Studio should be configured to:
1. Make a POST request to your FastAPI endpoint
2. Send the necessary context data (like `SolutionIDsUsed` and `UsedIDs`)
3. Store the response in your dynamic variable (`Topic.Var1RadioButton`), as in the sketch below
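For reference, here is a hedged sketch of what that HTTP node can look like in the topic's code view. The node id, endpoint URL, and input variable names (`Topic.SolutionIDsUsed`, `Topic.UsedIDs`) are placeholders; adapt them to your own topic and API.

```yaml
- kind: HttpRequestAction
  id: httpRequest_cardGen               # placeholder id
  displayName: Generate Adaptive Card JSON
  method: Post
  url: https://<your-fastapi-host>/generate-adaptive-card
  headers:
    Content-Type: application/json
  body:
    kind: JsonRequestContent
    content: |
      ={
        SolutionIDsUsed: Topic.SolutionIDsUsed,
        UsedIDs: Topic.UsedIDs
      }
  response: Topic.Var1RadioButton
  responseSchema: Any
```

Because the endpoint returns plain text, `Topic.Var1RadioButton` ends up holding the escaped JSON string that the `cardContent: =Text(Topic.Var1RadioButton)` expression then feeds to the Adaptive Card Template.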
Conclusion
Dynamic Adaptive Cards in Microsoft Copilot Studio open up powerful possibilities for creating responsive, data-driven conversational experiences. By combining Dynamic Variables with external API endpoints, you can create truly dynamic user interfaces that adapt to your users' needs and context.
My AI-DevSecOps Blogs
Thursday, July 31, 2025
Microsoft Copilot Studio: How to Generate Dynamic Adaptive Cards Contents
Friday, July 11, 2025
Troubleshooting MS Copilot Studio HTTP Request Failures: A Practical Solution
The Problem Description
If you're working with MS Copilot Studio, you've likely encountered this frustrating error:
An error has occurred. Error code: HttpRequestFailure Conversation Id: e3QGxxxxxx01NA-us Time (UTC): 2025-06-01T16:14:05.372Z.
This issue shows up intermittently in MS Copilot Studio. Based on my experience, it isn't a code issue but rather a configuration issue related to the Microsoft environment.
What We Discovered
When we enabled the Continue on Error feature in Copilot Studio, we encountered an HTTP 408 error (Request Timeout). This means the server didn't receive a complete request from the client within the expected timeframe.
After researching and contacting our support counterpart, they provided the following stacktrace:
CorrelationId: xxxx-b674-4c2a-xxxx-184197387367
Exception: Microsoft.IdentityModel.S2S.S2SAuthenticationException: S2xx2099:
An exception has been caught while validating the request.
Exception: [PII of type 'System.AggregateException' is hidden]
---> System.AggregateException: S2xx2096: Microsoft.IdentityModel.S2S.JwtAuthenticationHandler
caught exceptions when validating the token. See AuthenticationResult.InboundPolicyEvaluationResults
for additional details. (S2xx2086: An exception has been caught while validating the request
applying the policy with id : 'User'.
Exception: Microsoft.IdentityModel.Tokens.SecurityTokenInvalidAudienceException: IDXxx214:
Audience validation failed. Audiences: 'xxx-xxx-4af1-b9a8-09a648fb6699'.
Did not match: validationParameters.ValidAudience: 'null' or validationParameters.ValidAudiences:
Unfortunately, the issue remains unresolved as of this writing.
Our Temporary Solution: Retry Logic
As a quick fix, we implemented retry logic in the Copilot Studio workflow UI using the Goto Step feature for HTTP nodes, with a maximum retry count of 3.
Implementation Code
Here's the complete code snippet for our temporary fix:
1. Initialize Retry Counter
```yaml
- kind: SetVariable
  id: setVariable_btVmH4
  displayName: retryCount
  variable: Topic.retryCount
  value: 0
```
2. HTTP Request with Error Handling
```yaml
- kind: HttpRequestAction
  id: JDNAtp
  displayName: PF - HTTP Request
  method: Post
  url: https://pf-end-point-autoendpoint.eastus.inference.ml.azure.com/score
  headers:
    Authorization: Bearer 6mIiASFPiZKeaRxxUk4JQQJ99BGAAAAAAAAAAAAINFRAZML2PC6
    azureml-model-deployment: auto-20250708-551705
    Content-Type: application/json
  body:
    kind: JsonRequestContent
    content: |
      ={
        question: Topic.UserQuery,
        chat_history: Global.VarHistory
      }
  errorHandling:
    kind: ContinueOnErrorBehavior
    statusCode: Topic.ErrorStatusCode
  requestTimeoutInMilliseconds: 30000
  response: Topic.KBAresponse
  responseSchema: Any
  responseHeaders: Topic.ResponseHeader
```
3. Error Status Logging
```yaml
- kind: SendActivity
  id: sendActivity_dxnBNR
  activity: After HTTP request --- Error Code--- {Topic.ErrorStatusCode}
```
4. Retry Logic Implementation
```yaml
- kind: ConditionGroup
  id: conditionGroup_cN6dbL
  conditions:
    - id: conditionItem_xH0b5H
      condition: =!IsBlank(Topic.ErrorStatusCode)
      displayName: Condition to try Retry Logic
      actions:
        - kind: SetVariable
          id: setVariable_9Lszte
          variable: Topic.retryCount
          value: =Topic.retryCount + 1

        - kind: ConditionGroup
          id: conditionGroup_UA3fsX
          conditions:
            - id: conditionItem_itQSEP
              condition: =Topic.retryCount < 3
              actions:
                - kind: SendActivity
                  id: sendActivity_Y0iihp
                  activity: ---->{Topic.retryCount}---

                - kind: GotoAction
                  id: S4S1LT
                  actionId: JDNAtp

        - kind: SendActivity
          id: sendActivity_vOX9b5
          activity: Errorred --{Topic.retryCount}
```
How It Works
- Initialize a retry counter to 0
- Execute the HTTP request with error handling enabled
- Check if an error occurred (ErrorStatusCode is not blank)
- Increment the retry counter
- Retry up to 3 times using the GotoAction to jump back to the HTTP request
- Log the final error state if all retries fail
Key Takeaways
- This appears to be an authentication/token validation issue on Microsoft's end
- The Continue on Error feature is essential for implementing retry logic
- Retry logic provides a practical workaround while waiting for Microsoft to resolve the underlying issue
- The GotoAction feature in Copilot Studio makes implementing retry patterns straightforward
Conclusion
While this isn't a permanent solution, it significantly improves the reliability of HTTP requests in MS Copilot Studio workflows. If you're experiencing similar issues, consider implementing this retry pattern until Microsoft addresses the root cause.
Have you encountered similar issues with MS Copilot Studio? Share your experiences and solutions in the comments below!
Tags: #MSCopilotStudio #Azure #HTTPErrors #RetryLogic #Troubleshooting #Microsoft
Sunday, April 6, 2025
Deploy the Prompt Flow Code to Azure AI Foundry: A Step-by-Step Guide
Introduction:
Azure AI Foundry, combined with AZ Prompt Flow, provides a robust framework for building and deploying AI-driven applications. By following this guide, you can efficiently develop AI workflows, deploy them to a managed online endpoint using either the Azure ML CLI or the Python SDK, and integrate them with various services for real-time inference.
Key Benefits of Azure AI Foundry
- Fully managed: Azure handles the infrastructure and management of your AI deployment, reducing operational overhead
- Optimized AI workflows: Streamlined processes for developing, deploying, and managing AI models, with built-in automation features
- Built-in inferencing: Native capabilities for running trained models and generating predictions from new data
- Seamless AI model integration: Easily incorporate your own models or pre-trained ones into the Foundry environment
Once your Prompt Flow is tested and validated, you can deploy it to Azure AI Foundry for real-time inferencing.
Inferencing is the process of applying new input data to a machine learning model to generate outputs.
Deployment Approaches
As mentioned above, there are two primary methods for deploying your Prompt Flow to Azure AI Foundry:
1. Using the Azure ML CLI
2. Using the Python SDK
In this blog, I will explain approach #1: using the Azure ML CLI.
Step-by-Step Process for Azure ML CLI Deployment
The following steps will guide you through deploying a flow as a model in Azure ML, creating an online endpoint, and configuring deployments. This assumes you have tested your flow locally and set up all necessary dependencies, including the Azure ML workspace and required connections.
Azure ML CLI Approach
Pre-requisites:
1. Install the Azure CLI and the ML extension:
>az extension add --name ml --yes
Use the command below to validate that the ML extension is installed correctly:
>az extension show --name ml
2. Make sure the connections used in your flow have been created in your Azure ML workspace.
Deployment Steps
1. Registering a Machine Learning Flow as a Model in Azure ML
This command registers a defined machine learning flow as a managed model within Azure ML. This enables versioning, deployment, and MLOps capabilities for the flow.
Define the model metadata in a model.yaml file.
This file describes the model's name, version, and location.
> az ml model create --file honda-prod-model.yaml
Sample YAML for reference:
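The exact contents depend on your flow, but a minimal `honda-prod-model.yaml` might look like the sketch below; the path and description are placeholders you should adapt to your own flow folder.

```yaml
$schema: https://azuremlschemas.azureedge.net/latest/model.schema.json
name: honda-prod-model
version: 1
type: custom_model
path: ./honda-PROD            # local folder containing the flow files
description: Honda Prompt Flow registered as a model for online deployment.
```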
2. Creating an Online Endpoint for Real-time Inference
> az ml online-endpoint create --file honda-prod-endpoint.yaml
This command registers the endpoint with Azure ML and provisions the necessary infrastructure to handle incoming requests.
The honda-prod-endpoint.yaml file contains all the configuration details for your endpoint, including the name, authentication mode, and compute specifications.
After the endpoint is successfully created, you'll receive a response with the endpoint details, including its scoring URI. You can then proceed with deploying your model to this endpoint and configuring traffic distribution to optimize performance.
Sample YAML for reference:
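A minimal `honda-prod-endpoint.yaml` could look like this sketch; the endpoint name matches the one invoked later in this post, and `auth_mode` can be `key` or `aml_token` depending on your needs.

```yaml
$schema: https://azuremlschemas.azureedge.net/latest/managedOnlineEndpoint.schema.json
name: honda-chat-endpoint
description: Online endpoint that serves the Honda Prompt Flow model.
auth_mode: key
```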
3. Creating an Online Deployment in Azure ML
An online deployment in Azure Machine Learning (Azure ML) is a containerized environment where your model runs and serves real-time predictions. Each deployment is associated with an online endpoint, and multiple deployments can be managed under the same endpoint to support A/B testing, versioning, or gradual rollouts.
Deploying with 0% Traffic
To create a deployment without initially routing any traffic to it, use the following command:
>az ml online-deployment create --file honda-deployment.yaml
Deploying with 100% Traffic
Once the deployment is verified and tested, you can direct all traffic to it using:
>az ml online-deployment create --file honda-deployment.yaml --all-traffic
Sample YAML for reference:
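Here is an illustrative `honda-deployment.yaml` sketch; the deployment name and instance size are placeholders, and for Prompt Flow models the environment must include the Prompt Flow serving runtime (see the azureml-examples repository referenced at the end of this post).

```yaml
$schema: https://azuremlschemas.azureedge.net/latest/managedOnlineDeployment.schema.json
name: blue
endpoint_name: honda-chat-endpoint
model: azureml:honda-prod-model:1
instance_type: Standard_DS3_v2
instance_count: 1
# environment: <Prompt Flow serving image/environment, per the azureml-examples samples>
environment_variables:
  PROMPTFLOW_RUN_MODE: serving   # assumption: standard setting for Prompt Flow serving
```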
Test the Deployed Model by Invoking the Endpoint
Use the below command to test whether the deployments are working:
>az ml online-endpoint invoke --name honda-chat-endpoint --request-file sample-request.json
Here's an example of what your sample-request.json might look like:
{
  "input_data": {
    "input_string": "What are the maintenance intervals for a 2024 Honda Civic?"
  }
}
You can also test the endpoint using other tools like Postman or curl. When using these tools, you'll need:
- The scoring URI (available from the endpoint details)
- An authentication key or token
- A properly formatted request payload
Conclusion
Using the Azure ML CLI provides a straightforward way to deploy your Prompt Flow to Azure AI Foundry. The command-line approach offers flexibility and can be easily incorporated into CI/CD pipelines for automated deployments.
Reference : https://github.com/Azure/azureml-examples
Sunday, March 30, 2025
Deploying Azure ML Prompt Flow to Azure App Service: A Step-by-Step Guide
Introduction
When exploring deployment options for my recent Azure ML Prompt Flow project, I found that Azure App Service offered the perfect balance of simplicity and functionality. This approach stood out for its ability to get AI applications into production quickly with minimal overhead.
My Integration Scenario
My specific use case centered around having Microsoft Copilot Studio handle all UI orchestration, using Direct Line API to connect via BotFramework-WebChat. Microsoft Copilot Studio effectively communicates with the backend deployed Azure Prompt Flow through REST API workflow tasks. The BotFramework-WebChat implementation was architected around a Redux state management pattern, ensuring efficient data flow and a responsive, dynamic user experience.
Solution Architecture
Architecture Diagram - Azure ML Prompt Flow with App Service
Why Choose Azure App Service for Prompt Flow?
- Quick Deployment: Get your AI flows into production faster
- Minimal Infrastructure Management: Focus on your application, not infrastructure
- Perfect for Smaller to Medium-Scale Applications: Right-sized solution
- Simple Yet Scalable Solution: Start small and scale as needed
Step-by-Step Deployment Guide
Pre-requisites and Environment Setup
Before starting the deployment process, ensure you have the following prerequisites installed and configured:
- Python 3.12 or higher
- PowerShell or Git-Bash
- Azure CLI
- Docker Desktop (installed and configured)
- Conda Virtual Environment (created and activated)
- Prompt Flow Package (installed via pip)
Local Setup: Building and Testing Azure ML Prompt Flows
To begin, I cloned the Microsoft Prompt Flow repository as the foundation for my local development environment. Building directly on this codebase allowed me to leverage the existing Prompt Flow CLI and core functionalities.
Repository: https://github.com/microsoft/promptflow.git
Within this cloned repository, I created my specific Prompt Flow in VS Code, tailoring it to my application's requirements. To ensure proper functionality before deployment, I followed these steps for local testing:
1. Connection Setup: I established the necessary connections, such as the Azure OpenAI connection, using the Prompt Flow CLI:
```bash
pf connection create --file .\honda-PROD\azure_openai_connection.yaml
```
This step ensured that my flow could access the required AI models and services without issues.
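For reference, here is a minimal sketch of what such a connection YAML can look like; the resource endpoint, key, and API version are placeholders for your own Azure OpenAI resource.

```yaml
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/AzureOpenAIConnection.schema.json
name: azure_openai_connection
type: azure_open_ai
api_key: "<your-api-key>"
api_base: "https://<your-resource>.openai.azure.com/"
api_type: azure
api_version: "2024-02-01"      # placeholder; use the version your resource supports
```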
2. Local Flow Serving: I then served my Prompt Flow locally using the `pf flow serve` command:
```bash
pf flow serve --source .\honda-PROD\ --port 8085 --host localhost
```
This allowed me to access my flow via `http://localhost:8085/` for immediate testing and iteration.
Preparing for Azure App Service Deployment
Build and Deploy the FLOW Setup
Steps:
1. Login to Azure portal using CLI
```bash
az login # Authenticate yourself
```
2. Create Resource Group in the Azure Portal
```bash
az group create --name rg-for-honda-pf-app-service --location eastus2
az group list --output table   # or use --output json
```
3. Create Container Registry in the Azure Portal
```bash
az acr create \
--resource-group <resource-group-name> \
--name <container-registry-name> \
--sku <sku> \
--location <region>
```
Handy Sample AZ-CLI
```bash
az acr create --name mycrforpf --resource-group rg-for-honda-pf-app-service --sku Basic --location eastus2
az acr update --name mycrforpf --admin-enabled true
```
4. Build the FLOW as docker format app
Use the below command to build a flow as a docker format app:
```bash
pf flow build --source ../../flows/standard/web-classification --output dist --format docker
```
This will generate the DockerFile for you inside the dist folder.
5. Deploy the FLOW to Azure Portal as App Service
The code provided by Microsoft is available in:
- `/examples/tutorials/flow-deploy/azure-app-service/deploy.sh` (Bash Version)
- `/examples/tutorials/flow-deploy/azure-app-service/deploy.ps1` (PowerShell Version)
Use the above deploy script to build and deploy the image.
Testing the Deployed Flow
Once your flow is deployed to Azure App Service, you can test it by sending a POST request to the endpoint or by browsing the test page.
To test the flow:
1. Use a REST client like Postman or CURL to send a POST request to your endpoint
- Sample endpoint: `https://honda-pf-99d9m.azurewebsites.net/score`
- Make sure to set the Content-Type header to `application/json`
- Include your request payload in the body of the POST request
2. **Test via the built-in test page** that comes with the deployment
- Access the test page by navigating to your App Service URL in a browser
- This provides a simple interface to test your flow without additional tools
The deployed flow exposes an API endpoint that follows the same interface patterns as when you test locally, making it straightforward to transition from development to production.
Conclusion
By following these steps, I was able to successfully deploy my Azure ML Prompt Flow to Azure App Service, creating a robust and scalable solution for my AI application. This approach provided the perfect balance of simplicity and functionality, allowing me to get my application into production quickly with minimal overhead.
The combination of Microsoft Copilot Studio for UI orchestration and Azure App Service for backend deployment created a powerful and flexible architecture that can be adapted to a variety of AI application scenarios.
Join the Conversation: Together We Learn
If you've found a better way to handle certain aspects of the deployment, please share your insights - I'm always looking to improve this workflow!
Saturday, March 29, 2025
Unleash Your LLM Potential: Deploying an Azure ML Prompt Flow: A Practical Guide
Introduction
In the rapidly evolving landscape of Large Language Models (LLMs), efficiently deploying your AI applications is crucial. Today, I want to share my recent exploration into deploying Azure Machine Learning Prompt Flows, a powerful tool for streamlining the entire LLM application development lifecycle.
What is Azure ML Prompt Flow?
Azure ML Prompt Flow is more than just a development tool; it's a complete ecosystem designed to streamline the entire lifecycle of AI application development.
Think of it as an orchestrator that lets you chain together prompts, Python scripts, data sources, and evaluation metrics in a structured pipeline.
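To make that concrete, here is a minimal, illustrative `flow.dag.yaml` with one Python node feeding one LLM node; the node names, file names, connection, and deployment name are placeholders rather than anything from my actual project.

```yaml
inputs:
  question:
    type: string
outputs:
  answer:
    type: string
    reference: ${generate_answer.output}
nodes:
  - name: retrieve_context          # placeholder Python tool node
    type: python
    source:
      type: code
      path: retrieve_context.py
    inputs:
      question: ${inputs.question}
  - name: generate_answer           # placeholder LLM node
    type: llm
    source:
      type: code
      path: generate_answer.jinja2
    inputs:
      deployment_name: gpt-4o       # placeholder deployment
      question: ${inputs.question}
      context: ${retrieve_context.output}
    connection: azure_openai_connection
    api: chat
```

Each node's output can be referenced by downstream nodes or by the flow outputs, which is what makes the chained, pipeline-style orchestration possible.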
My Recent Project: Diving into Deployment
My goal was simple: transform a promising AI prototype into a robust, production-ready application. Azure ML Prompt Flow promised to simplify the process, and I was eager to see if it lived up to the hype. It definitely did, providing a structured way to build, test, and iterate on my LLM-powered application.
My Journey Through Azure Prompt Flow Deployment Strategies
Deploying an AI application is no longer a one-size-fits-all endeavor. My recent project with Azure Prompt Flow illuminated the complexity and flexibility of modern AI deployment strategies. Drawing directly from Microsoft's official documentation, I'll break down the four primary deployment approaches that can transform your AI project from a prototype to a production-ready solution.
Deployment Approaches: A Deep Dive
These are the four available approaches recommended by Microsoft:
**1. Deploy to Azure App Service**
This method offers a fully managed platform for hosting web applications. This approach is particularly compelling for developers seeking:
* Rapid deployment
* Minimal infrastructure management
* Easy scaling capabilities
* Simplified web application hosting
**2. Deploy a flow using Docker**
Docker provides containerization, enabling you to package your Prompt Flow and its dependencies into a portable container. Containerization through Docker offers unprecedented consistency and portability for your Prompt Flow applications:
Key capabilities and features:
* Package entire application environment
* Ensure consistency across development and production
* Simplify dependency management
* Enable seamless migration between different infrastructure
**3. Deploy a flow using Kubernetes**
For applications demanding maximum scalability and reliability, Kubernetes emerges as the gold standard:
Key benefits:
* Advanced container orchestration
* Automatic scaling and load balancing
* High availability architecture
* Complex microservice management
**4. Deploy the Prompt Flow code to Azure AI Foundry**
Microsoft's Azure AI Foundry represents the next evolution in AI application deployment:
Azure AI Foundry is a newer offering that provides an integrated development environment for building and sharing AI solutions, making it a great option for collaborative development.
Key capabilities and features:
* Integrated AI development environment
* Streamlined model management
* Enhanced collaboration tools
* Comprehensive AI solution lifecycle support
Choosing Your Deployment Strategy
Selecting the right approach depends on multiple factors:
**Project Complexity:**
* Simple web app → Azure App Service
* Consistent environment needs → Docker
* Enterprise-scale applications → Kubernetes
* Collaborative AI development → Azure AI Foundry
**Other Factors:**
* Scalability Requirements
* Team Expertise
* Infrastructure Constraints
* Performance Expectations
My two cents:
If you're looking to deploy Azure ML Prompt Flows, don't be afraid to experiment. Choose a deployment method that aligns with your needs and comfort level, and be prepared to get your hands dirty. The learning experience is worth it!
I'd love to hear about your experiences deploying Prompt Flows! Share your stories and tips in the comments below.
Happy deploying!!!
Thursday, March 6, 2025
Webchat.JS / Direct Line API - How to Clear the Chat Bot's Previous Messages
store.getState().activities = [];
For more details, please see this post:
https://stackoverflow.com/questions/79487310/webchat-js-direct-line-api-copilot-studio-reset-the-chat-bot-messages
Wednesday, April 24, 2024
Elasticsearch - Error: curl: (52) Empty reply from server
On Windows, the curl bulk-import command below throws the following exception:
C:\elasticsearch-8.12.2>curl -XPOST "https://localhost:9200/products/_bulk" -H "Content-Type: application/json" --data-binary "@products-bulk.json"
Error: curl: (52) Empty reply from server
Solution:
Add the --cacert, --insecure, and -u uid:pwd options to your curl command.
I tested this on Windows and it works fine:
curl --cacert config/certs/http_ca.crt --insecure -u elastic:l9RyDpzFJ7TBwhC06e9E -XPOST "https://localhost:9200/products/_bulk" -H "Content-Type: application/json" --data-binary "@products-bulk.json"