Rules as Code helps computers read law and regulation


The OECD has published a report on Rules as Code efforts around the world. The Australian Senate Select Committee on Financial Technology and Regulatory Technology will be accepting submissions on the matter until 11 December 2020.

Machines cannot read and respond to rules expressed in human language. For machines to read rules and act on them, a coded version of the rules is required. Coding legal rules is not new: over the past five decades, Artificial Intelligence (AI) and law researchers have produced a range of formally coded versions of tax and other laws. Over the past decade, Data61 (the data science arm of the CSIRO) has developed a way to re-imagine regulation as an open platform based on digital logic.

Coding legal rules is complex, as rules written in human language are not drafted with coding in mind. Broad rules are difficult to interpret and to apply to specific cases. Rules as Code means that drafters and coders develop legal rules together, producing both a human-language text and an official coded version. Rules as Code has efficiency benefits, but it may also lead to a loss of flexibility in how laws are interpreted. Interpretation of law is carried out by various stakeholders, with final authority resting in the courts. A coded version created during drafting may be too rigid to respond properly and fairly to unforeseen cases. Code should therefore be balanced by a deep knowledge of this structural risk. Rules as Code also assumes that the law, and the role of government, remain the same as in the 20th century.
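As a minimal illustration of what a coded version of a rule looks like, consider the invented eligibility rule below (it is not drawn from any real statute). The human-language text maps onto executable logic, which is where the efficiency gain comes from, but also where the rigidity lies: any case the drafters did not anticipate, such as income earned irregularly, falls outside the coded conditions.

```python
from dataclasses import dataclass

# Hypothetical rule, for illustration only:
# "A person is eligible for the benefit if they are at least
#  18 years old and their annual income is below $40,000."

@dataclass
class Person:
    age: int
    annual_income: float

def is_eligible(person: Person) -> bool:
    """Coded version of the hypothetical eligibility rule."""
    return person.age >= 18 and person.annual_income < 40_000

# The code answers clear-cut cases instantly...
print(is_eligible(Person(age=25, annual_income=30_000)))  # True
print(is_eligible(Person(age=17, annual_income=10_000)))  # False
# ...but has no way to weigh circumstances the rule never named.
```

A court interpreting the human-language rule could consider context and purpose; the coded version can only evaluate the two conditions it was given.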

Technology is changing law and empowering people and other entities. Citizens use technology in every area of their lives, and they have a basic right to interpret, use, and respond to rules in a way that is consistent with the law. The goal is to develop AI solutions that interpret and code legal rules with sophistication and transparency, advancing the objective of the rule while supporting the rights of individuals. It is a future vision that requires, among other things, mechanisms to determine when to involve human regulators and domain experts, and institutions that would ensure the integrity of the outcomes.