
Contracting for the Future: How AI Is Reshaping Risk, Responsibility, and Commercial Frameworks


By Jeff Holowaychuk and Michal Jaworski

Last week, we had the privilege of presenting to a group of IT and cybersecurity experts in the higher learning sector at BCNET CONNECT 2026 on important issues to consider when contracting for AI and AI-augmented services, as well as emerging trends in the AI space. Here are the key takeaways:

  • Ownership of AI-generated outputs remains unsettled under Canadian intellectual property laws, which generally require a human creator. While the courts and legislators work to resolve these tensions, organizations should carefully consider contractual ownership structures and potential uses of AI outputs to mitigate risk.
  • Data confidentiality remains a key risk issue. It is now well understood that information disclosed to public AI models can become part of training datasets and may be reproduced in later outputs. Even with enterprise AI solutions, however, it is important to watch for potential secondary uses of your data, including use as training data, use for the model provider's own development purposes, and other unforeseen applications.
  • Where personal information is involved in any initiative that leverages AI, it is crucial to put appropriate contractual protections in place and conduct a privacy impact assessment to support privacy compliance, identify potential data leakage (such as disclosure to third-party model providers) and mitigate privacy risk.
  • Indemnities are becoming more common in AI model provider contracts, moving away from earlier contract structures that placed all liability on end users. As indemnities become more market-standard, particularly for outputs and IP infringement, they should be reviewed carefully to confirm that the chosen AI model and use case actually fall within their scope.
  • With AI regulation almost certain to emerge in the near future, consider future-proofing contracts with provisions that support adaptation to changes in AI regulation, and provide off-ramps where AI model providers cannot comply with new legal requirements.
  • In professional services engagements where service provider personnel leverage AI tools, contracts should provide for an appropriate allocation of responsibility and liability for AI-generated errors and hallucinations. Organizations may want to directly address potential damages for reputational harm or reduction in value of affected deliverables.
  • The concept of sovereign AI is gaining momentum in Canada and globally, with pushes for locally controlled models with no foreign infrastructure ties.
  • AI agents are here, raising urgent questions about appropriate agent permissions, preventative measures to avoid context loss by agents and the practicality and viability of genuine human oversight.
  • Alignment to established AI risk frameworks, such as NIST’s AI Risk Management Framework or ISO/IEC 23894:2023 relating to AI risk management, will help organizations demonstrate intentional and defensible risk mitigation strategies before AI adoption.

If you have any questions about the above or would like to discuss AI contracting for your organization, please contact Michal or Jeff.


