Sunday, March 15, 2026

I have read over 100 AI RFPs from large companies. Here is the matrix of guardrails and commitments that emerges

Companies are adopting generative AI at enormous scale. We are augmenting work and reshaping business processes from marketing to security operations. And we are delivering tremendous benefits: increasing productivity, improving quality, and reducing time to market.

With this progress comes the need to think about the risks. These include software vulnerabilities, cyberattacks, improper system access, and exposure of sensitive data. There are also ethical and legal considerations, such as violations of copyright or privacy laws, bias or toxicity in generated output, the spread of disinformation and deepfakes, and a deepening of the digital divide. We are currently witnessing the worst of these in public life, where algorithms are being used to spread false information, manipulate public opinion, and undermine trust in institutions. All of this underscores the importance of security, transparency, and accountability when building and using AI systems.

Good work is being done. In the US, President Biden's Executive Order on AI aims to promote the responsible use of AI and address issues such as bias and discrimination. The National Institute of Standards and Technology (NIST) has developed a comprehensive framework for the trustworthiness of AI systems. The European Union has proposed the AI Act, a regulatory framework to ensure the ethical and responsible use of AI. And the AI Safety Institute in the UK is working to develop safety standards and best practices for the use of AI.

The ultimate responsibility for setting common AI guardrails lies with government, but we are not there yet. Today we have a patchwork of policies that are regionally inconsistent and unable to keep up with the rapid pace of AI innovation. In the meantime, the responsibility for safe and responsible deployment lies with us: AI vendors and our enterprise customers. We need a shared set of guardrails.

A new matrix of responsibilities

Forward-thinking companies are getting proactive. They are forming internal steering committees and oversight groups to define and implement policies in accordance with their legal obligations and ethical standards. I have read more than 100 requests for proposals (RFPs) from these organizations, and they are good. They have informed our framework here at Writer for building our own trust and safety programs.

One way to organize our thinking is in a matrix with four areas of commitment: data, models, systems, and operations. We then assign these to three responsible parties: vendors, companies, and governments.

Guardrails in the "Data" category include data integrity, provenance, privacy, storage, and legal and regulatory compliance. In "Models" they are transparency, accuracy, bias, toxicity, and misuse. In "Systems" they are security, reliability, customization, and configuration. And in "Operations" they are the software development lifecycle, testing and validation, access and other policies (for humans and machines), and ethics.

Within each guardrail category, I recommend listing your key commitments, articulating what is at stake, defining what "good" means, and establishing a measurement system. Each area will look different across vendors, companies, and government entities, but ultimately they must be interlocking and mutually supportive.

I selected sample questions from our clients' RFPs and translated each one to demonstrate how each AI guardrail could work.

Data → Privacy. Key questions: What data is confidential? Where is it located? How can it be disclosed? What is at stake if it is exposed? How can it best be protected? RFP language: Do you anonymize and encrypt confidential data and control access to it?

Models → Bias. Key questions: Where are we biased? Which AI systems influence our decisions or outcomes? What is at stake if we get it wrong? What does "good" look like? What is our tolerance for error? How do we measure ourselves? How do we test our systems over time? RFP language: Describe the mechanisms and methods you use to detect and mitigate bias. Describe your method for bias/fairness testing over time.

Systems → Reliability. Key questions: How reliable does our AI system need to be? What are the consequences if we miss our availability SLA? How do we measure downtime and evaluate the reliability of our system over time? RFP language: Do you document, practice, and measure response plans for AI system downtime?

Operations → Ethics. Key questions: What role do humans play in our AI programs? Do we have a framework that defines our roles and responsibilities? RFP language: Does the organization define policies and procedures that outline and differentiate the various human roles and responsibilities when interacting with or monitoring the AI system?

As we transform business with generative AI, it is critical to identify and address the risks that come with its implementation. While government initiatives are underway, today the responsibility for using AI safely and responsibly is ours. By proactively implementing AI guardrails across data, models, systems, and operations, we can reap the benefits of AI while minimizing the harm.

May Habib is CEO and co-founder of Writer.

The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.
