Datafication of Automated (Legal) Decisions - or how (not) to install a GPS when law is not precisely a map

Translated title of the contribution: Datificering af (retlige) beslutninger: Hvorfor installere en GPS når retten ikke er et kort

Publication: Conference contribution without publisher/journal · Paper · Research

Abstract

Even though I maintain that it is a misconception to state that states are “no longer” the only actors, since they never were, it does make sense to “shed light on the impact of (…) new tendencies on legal regulatory mechanisms (…)”. One such regulatory tendency is obviously the automation of (legal) decisions, which has implications for legal orders, legal actors and legal research, not to mention legal legitimacy as well as personal autonomy and democracy. On the one hand, automation may facilitate better, faster, more predictable and more coherent decisions and leave cumbersome, time-consuming calculations to machines. On the other hand, automation carries its own problems:
Firstly, decision making may be hidden in algorithms that are inaccessible and incomprehensible to most people. This may undermine personal autonomy: we may believe that we are making genuine decisions when in fact a substantial part of the components of those decisions is prefabricated. With the attendant risk of misplacing responsibility, this may be called the “Google syndrome”. Hidden algorithms may also constitute the basis for decisions concerning individuals (the passive aspect), the “profiling syndrome”. Based on big data, machines may be able (or are thought to be able) to construct a predictive profile, exposing individuals to the risk of being excluded from life and health insurance, of becoming targets of computational policing, etc. An additional dimension of these prefabricated decisions is the commercial aspect. Obviously, commercial interests are not illegitimate per se; on the contrary. The problem is the hidden dimension: the fact that commercial interests may influence the algorithms in use, with implications for what is perceived as personal choices and decisions, for decisions and calculations regarding individuals, and perhaps even for decisions in relation to democracy and government.
Secondly, it is questionable whether a (legal) decision is, or ought to be, entirely computable. The very idea of an automated decision seems to entail that the decision is made within a fixed set of options, including clear-cut categorization of facts. This presumes that facts come prepackaged and auto-categorizable, and it leaves out human creativity and the occurrence of new, unforeseen situations and possibilities. Even though it may at times be worthwhile to reduce seemingly open-ended situations to closed ones with a vast but technically manageable amount of fixed data (driving cars may be a good example), it may be counterproductive to reduce all situations to categorizable and foreseeable ones. This automation skepticism hinges on various concepts, such as ‘tychism’ (Peirce), ‘fact skepticism’ (Frank), ‘defeasible logic’ (Hart) and ‘communicative action’ (Habermas), which will be engaged in considering the possible limits of automated decisions (work in progress). For now, it may suffice to refer to Montesquieu, who held 1) that judges of the lower courts ought to be nothing but inanimate beings, and 2) that the upper court ought to be able to mould the law in favor of the law, thus suggesting that automated decisions should in principle always be subject to human revision.
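The point about defeasibility can be made concrete with a minimal sketch (a hypothetical illustration, not from the paper; the rule names and functions below are invented for this example). A fixed rule set yields a conclusion "unless defeated" by an exception, so recognizing a new, unforeseen circumstance changes the outcome without rewriting the rule itself, and an undecided case falls back to human judgment:

```python
# Illustrative sketch of a defeasible rule (hypothetical example, not from
# the paper): conclusions hold only until a registered exception defeats
# them, and undecided cases are deferred to a human judge.

def defeasible_decision(rules, exceptions, facts):
    """Apply each rule in order, but let any matching exception defeat it."""
    for name, condition, conclusion in rules:
        if condition(facts):
            # A known exception defeats this rule for these facts.
            if any(exc_cond(facts) for _, exc_cond in exceptions.get(name, [])):
                continue  # rule defeated; try remaining rules
            return conclusion
    return "no decision (refer to human judge)"

# Toy rule: a contract is valid if it is signed.
rules = [("signed", lambda f: f.get("signed"), "valid")]
exceptions = {"signed": []}

print(defeasible_decision(rules, exceptions, {"signed": True}))

# Later, an unforeseen circumstance (duress) is recognized and added as a
# defeater; the same signed contract now escapes the automated conclusion.
exceptions["signed"].append(("duress", lambda f: f.get("duress")))
print(defeasible_decision(rules, exceptions, {"signed": True, "duress": True}))
```

The sketch mirrors the Montesquieu point in the abstract: the automated rule decides the routine case, while the defeated case falls through to human revision rather than being forced into a prefabricated category.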
Original language: English
Publication date: 2017
Number of pages: 10
Status: Published - 2017
Event: INTRAlaw Law in Transition, Aarhus University, Aarhus, Denmark
Duration: 28 Sep 2017 – 29 Sep 2017
http://law.au.dk/forskning/forskergrupper/intralaw/conference-law-in-transition/

Fingerprint

Law
automation
autonomy
Montesquieu
democracy
life insurance
decision making
communicative action
legal order
health insurance
search engine
creativity
legitimacy
responsibility

Keywords

  • automated decisions
  • human decision making
  • defeasibility
  • procedural justice
  • anthropomorphism
  • mapability of law

Cite this

Schaumburg-Müller, S 2017, 'Datafication of Automated (Legal) Decisions - or how (not) to install a GPS when law is not precisely a map', paper presented at INTRAlaw Law in Transition, Aarhus, Denmark, 28/09/2017 - 29/09/2017. (Paper under the Legal Informatics Project.)
