Description
Track
Creative Apps (GitHub Copilot)
Project Name
AI LLM Score against Enterprise Risk Rubric
GitHub Username
Repository URL
https://github.com/krob527/AITermsScore
Project Description
getAITermsScore is a web application that automatically evaluates the legal documents of AI products — Terms of Service, Privacy Policies, Data Processing Agreements, and Acceptable Use Policies — and produces a structured, scored risk assessment.
The Problem
AI vendors publish Terms of Service, Privacy Policies, Data Processing Agreements, and Acceptable Use Policies that routinely run to 50+ pages of dense legal language. Enterprises and individuals adopting AI tools are expected to accept these documents — yet almost no one reads them. Hidden inside are clauses that grant vendors broad rights to train on your data, limit liability to a fraction of fees paid, give 30-day notice for unilateral changes, and provide no audit rights whatsoever.
Most people click "I Agree" without knowing what they're agreeing to.
What AITermsScore Does
AITermsScore sends an AI agent to read those documents for you. Enter any AI product name in the browser. Within 2–3 minutes, the app returns a structured risk scorecard across 8 legal dimensions — rated 0–5, with citations from the actual documents — so you can compare vendors and understand your real exposure before signing up.
No legal background required. No manual document hunting. Just a score you can act on.
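The scorecard described above could be modeled roughly as follows. This is a hypothetical sketch, not the repo's actual data model: the class and field names are invented, and the app may compute the overall score as a weighted rather than simple mean.

```python
from dataclasses import dataclass, field

@dataclass
class DimensionScore:
    """One of the 8 legal risk dimensions (hypothetical shape)."""
    name: str
    score: int                  # 0 (low risk) .. 5 (high risk)
    rationale: str
    citations: list[str] = field(default_factory=list)

@dataclass
class Scorecard:
    vendor: str
    dimensions: list[DimensionScore]

    @property
    def overall(self) -> float:
        """Simple mean across dimensions; the real app may weight them."""
        return sum(d.score for d in self.dimensions) / len(self.dimensions)

card = Scorecard("ExampleAI", [
    DimensionScore("Data training rights", 4, "Vendor may train on inputs"),
    DimensionScore("Liability cap", 3, "Capped at 12 months of fees"),
])
print(round(card.overall, 1))  # → 3.5
```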
Demo Video or Screenshots
Specify Model and Vendor
Review Score Card
Review Details
Primary Programming Language
Python
Key Technologies Used
The app spins up an Azure AI Foundry Agent backed by GPT-4.1 that searches the web for the vendor's current legal documents using DuckDuckGo, scores them across multiple risk dimensions using a configurable rubric, and streams a live scorecard back to the browser. The result includes per-dimension scores (0–5), rationale, key findings, an overall risk score, and a full written report.
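The document-discovery step described above might look like the sketch below: one targeted DuckDuckGo query per legal document type. The function name and query shapes are assumptions for illustration; the repo's actual agent tool wiring will differ.

```python
# Hypothetical sketch of document discovery: build one targeted
# search query per legal document type for a given product.
DOC_TYPES = [
    "Terms of Service",
    "Privacy Policy",
    "Data Processing Agreement",
    "Acceptable Use Policy",
]

def build_queries(product: str) -> list[str]:
    return [f'{product} "{doc}"' for doc in DOC_TYPES]

for q in build_queries("OpenAI ChatGPT"):
    print(q)

# Executing the queries needs a search client, e.g. the `duckduckgo_search`
# package (assumed dependency; requires network access):
#   from duckduckgo_search import DDGS
#   results = DDGS().text(build_queries("OpenAI ChatGPT")[0], max_results=5)
```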
Submission Type
Individual
Team Members
No response
Submission Requirements
- My project meets the track-specific challenge requirements
- My repository includes a comprehensive README.md with setup instructions
- My code does not contain hardcoded API keys or secrets
- I have included demo materials (video or screenshots)
- My project is my own work with proper attribution for any third-party code
- I agree to the Code of Conduct
- I have read and agree to the Disclaimer
- My submission does NOT contain any confidential, proprietary, or sensitive information
- I confirm I have the rights to submit this content and grant the necessary licenses
Quick Setup Summary
Clone the Repo
GitHub CLI
gh repo clone krob527/AITermsScore
HTTPS
https://github.com/krob527/AITermsScore.git
Setup
# 1. Create and activate virtual environment
python -m venv .venv
.venv\Scripts\Activate.ps1
# 2. Install dependencies
pip install -r requirements.txt
# 3. Configure environment
Copy-Item .env.example .env
# Edit .env — see variables table below
Environment variables (.env)
| Variable | Required | Where to find it |
|---|---|---|
| AZURE_AI_PROJECT_ENDPOINT | ✅ | AI Foundry portal → your project → Overview → Project endpoint. Format: `https://<hub>.services.ai.azure.com/api/projects/<project>` |
| AZURE_AI_MODEL_DEPLOYMENT | ✅ | AI Foundry portal → your project → Deployments → deployment name (e.g. gpt-4.1) |
| AGENT_NAME | optional | Name to register the agent under. Defaults to AITermsScoreAgent |
| AGENT_ID | optional | If set, the app skips all agent API calls and uses this ID directly. Recommended for App Service. |
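As a quick sanity check before starting the app, the required variables can be validated with a few lines of stdlib Python. This is a sketch of one possible check, not code from the repo; the function name and messages are invented.

```python
import os

REQUIRED = ["AZURE_AI_PROJECT_ENDPOINT", "AZURE_AI_MODEL_DEPLOYMENT"]

def check_env(env: dict) -> list:
    """Return a list of configuration problems (empty means OK)."""
    problems = [f"{k} is not set" for k in REQUIRED if not env.get(k)]
    endpoint = env.get("AZURE_AI_PROJECT_ENDPOINT", "")
    if endpoint and "/api/projects/" not in endpoint:
        problems.append("AZURE_AI_PROJECT_ENDPOINT does not look like a project endpoint")
    return problems

good = {
    "AZURE_AI_PROJECT_ENDPOINT": "https://hub.services.ai.azure.com/api/projects/proj",
    "AZURE_AI_MODEL_DEPLOYMENT": "gpt-4.1",
}
print(check_env(good))        # → []
# In practice you would pass os.environ after loading .env:
#   check_env(dict(os.environ))
```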
Run locally
python app.py
# Open http://localhost:5000
CLI (optional)
# Score a product
python main.py score "OpenAI ChatGPT"
python main.py score "Google Gemini" --vendor google --timeout 600
# Delete the registered agent from AI Foundry
python main.py delete-agent
Azure Deployment
The app deploys to Azure App Service (Linux) using the Azure Developer CLI (azd).
Prerequisites
| Requirement | Notes |
|---|---|
| Azure Developer CLI (azd) | Install guide |
| Azure CLI (az) | az login |
| Contributor role on the target resource group | To create and update resources |
First-time deploy
# Authenticate
az login
azd auth login
# Set required environment values
azd env set AZURE_SUBSCRIPTION_ID <your-subscription-id>
azd env set AZURE_LOCATION eastus
azd env set AZURE_AI_PROJECT_ENDPOINT "https://<hub>.services.ai.azure.com/api/projects/<project>"
azd env set AZURE_AI_MODEL_DEPLOYMENT gpt-4.1
# Provision infrastructure + deploy code
azd up
azd up will:
- Create an App Service Plan (B1 Basic, Linux) and a Web App in your resource group
- Enable a System-Assigned Managed Identity on the Web App
- Create Application Insights and a Log Analytics Workspace
- Build the Python app with Oryx and deploy it
Required Azure app settings
These are set automatically by azd up via infra/main.bicep. If you need to update them manually:
az webapp config appsettings set \
--name <web-app-name> \
--resource-group <resource-group> \
--settings \
AZURE_AI_PROJECT_ENDPOINT="https://<hub>.services.ai.azure.com/api/projects/<project>" \
AZURE_AI_MODEL_DEPLOYMENT="gpt-4.1" \
AGENT_ID="<your-agent-id>" \
WEBSITES_PORT="8000" \
SCM_DO_BUILD_DURING_DEPLOYMENT="true"

| App Setting | Purpose |
|---|---|
| AZURE_AI_PROJECT_ENDPOINT | AI Foundry project endpoint URL |
| AZURE_AI_MODEL_DEPLOYMENT | Model deployment name |
| AGENT_ID | Pre-registered agent ID — bypasses list_agents / create_agent API calls on startup. Find it in AI Foundry portal → your project → Agents. |
| WEBSITES_PORT | Must be 8000 — tells App Service to route traffic to gunicorn's port |
| SCM_DO_BUILD_DURING_DEPLOYMENT | Must be true — tells Oryx to run pip install at deploy time, not at cold-start |
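The AGENT_ID fast path described in the table can be sketched as a small resolver: if the setting is present, the app uses it directly and makes no agent API calls at startup; otherwise it falls back to looking up or creating the agent. Function and parameter names here are hypothetical, with a lambda standing in for the real AI Foundry client call.

```python
# Hypothetical sketch of the AGENT_ID fast path.
def resolve_agent_id(settings: dict, find_or_create) -> str:
    agent_id = settings.get("AGENT_ID")
    if agent_id:             # pre-registered agent: no API calls on startup
        return agent_id
    return find_or_create()  # otherwise list/create via the Agents API

# Stub in place of the real list_agents/create_agent round trip:
print(resolve_agent_id({"AGENT_ID": "asst_123"}, lambda: "new-agent"))  # → asst_123
print(resolve_agent_id({}, lambda: "new-agent"))                        # → new-agent
```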
Required IAM role assignments
The Web App's System-Assigned Managed Identity must have the following roles on your Azure AI Foundry resource:
| Role | Scope | Why |
|---|---|---|
| Azure AI Developer | AI Foundry account or project | Create threads, post messages, run agents |
| Azure AI User | AI Foundry account | Read model deployments |
To assign (replace <principal-id> with the managed identity principal ID from azd up output or the portal):
# Get the resource ID of your AI Foundry account
$accountId = az cognitiveservices account show `
  --name <foundry-account-name> `
  --resource-group <resource-group> `
  --query id -o tsv
# Assign Azure AI Developer
az role assignment create `
  --assignee <principal-id> `
  --role "Azure AI Developer" `
  --scope $accountId
# Assign Azure AI User
az role assignment create `
  --assignee <principal-id> `
  --role "Azure AI User" `
  --scope $accountId
Redeploy after code changes
azd deploy
Technical Highlights
I began steering the AI after seeing it get stuck in several areas, co-authoring by directing it toward whatever appeared to be troubling the app.
The presentation is useful in that it provides color coding, an overall scorecard, and a detailed description of how each score was reached.
I switched from Claude Sonnet 4.6 to GPT-5.3-Codex after troubleshooting a loop problem seemed to be running us in circles. I also asked it to consult the Best Practices MCP server to help pinpoint and resolve the issue.
I also had significant issues trying to get Grounding with Bing Search to work, so I pivoted to DuckDuckGo instead.
Challenges & Learnings
I ran into many challenges, given that I have never considered myself a developer. I was happy to find that the AI supplied troubleshooting expertise that would otherwise have taken me inordinate amounts of time: identifying the issue (often by evaluating relevant logs, which I would not have known where to find), researching a potential solution (which in this case could easily have taken days), and verifying that the coded fixes worked (testing is well beyond my skillset). I am still interested in learning more about observability and how to instrument these solutions for optimized operation.
Contact Information
https://www.linkedin.com/in/kevin-robinson-28b493393/
Country/Region
United States


