StutiShahFirstEssay 3 - 09 Jan 2022 - Main.StutiShah
ADDRESSING THE REGULATORY VACUUM ON ARTIFICIAL INTELLIGENCE IN INDIA
-- By StutiShah - 22 Oct 2021 (Revised 8 Jan 2022)
With an explosion in the volume of commodifiable data, the availability of inexpensive computational power, and investment in technology, machine learning that uses pattern recognition and autonomous software-based decision-making, known as 'Artificial intelligence' (AI), is making inroads into every facet of our lives. Though the Government and large corporates in India have commenced leveraging the benefits of AI across sectors, in an unregulated environment such as India's, AI could prove counter-productive.
Data Privacy in an AI Regime
As noted above, there is currently no robust legislation governing AI in India. Since data primarily fuels AI, big corporates have been exploiting the loopholes in India's toothless data protection law while collecting and processing personal data. Therefore, when a new personal data protection bill (PDP Bill) was published by the Indian Government in 2019, I was hopeful that it would address this vacuum. Though it is more nuanced and favorable to citizens than the prevailing law on data protection, I was disappointed that it retained many of its drawbacks. Subsequently, a Joint Parliamentary Committee proposed recommendations to the PDP Bill on December 16, 2021 (Recommendations), which were even more problematic.
I have identified below the proposals of the PDP Bill (along with the Recommendations) that are most detrimental to citizens in an AI regime.
First, the PDP Bill is myopic in that it neither protects citizens from the harms of automated profiling and decision-making, nor mandates that they be informed of the processes employed. Similarly, though the Recommendations require the regulator to be informed of a data breach, they disregard citizens' right to be made aware of a breach. How can a user take action against the data processor, or remedy the breach, if she is unaware of it in the first place?
The opacity of the law and judicial processes in Franz Kafka's 'The Trial' reflects how AI applications function. They decide whether citizens are eligible for employment, a loan, or insurance. However, citizens are oblivious to the processes employed by AI applications and the basis on which they make decisions. This resonates with the experience of the protagonist, K, who is ignorant of the reason for his arrest and therefore cannot defend himself against the charges. Like AI applications, the judiciary in 'The Trial' cannot be held accountable even when it acts unscrupulously.
Furthermore, by retaining consent as the primary basis on which personal data is to be collected and processed, the drafters of the PDP Bill have continued with a futile formality, premised on an unfair bargaining power in favor of sellers and service providers, which cannot effectively safeguard citizens' privacy.
Finally, the Government's think tank, NITI Aayog, has published a number of papers on harnessing AI's potential in India, and what struck me was how flagrantly they advocated the installation of 'sophisticated surveillance systems' that could track people's movement and behavior. Since the PDP Bill keeps the Government immune from all compliances, and the Recommendations further expand the powers enjoyed by the Government under the PDP Bill by introducing vague grounds such as 'national security, public order, sovereignty and integrity of India', surveillance measures such as these can have dire consequences. If the Recommendations are incorporated, the PDP Bill would keep the Government immune from all liability, in disregard of the Supreme Court's decision in Puttaswamy v. Union of India. What troubles me more is that, despite witnessing harrowing accounts of Government surveillance, reminding us of Bentham's and Foucault's prescient writings on the Panopticon, Indians trust their Kafkaesque governments with their data (or rather have no choice but to do so).
Envisioning a Consumer-Centric Framework
It is important for India to adopt a policy for AI which protects citizens and is dependent on their informed choice and consent. The policy should ensure that citizens' data is not commodified.
Such a policy on AI must incorporate the foundational principles set out below:
• Agility: It should be nimble enough to adapt to the ever-evolving nature of AI.
• Information Symmetry: It should incorporate transparency and adequate disclosures to address the inherent information asymmetry between citizens on the one hand, and corporates and the Government on the other. Particular care must be taken with disclosures made to vulnerable categories of citizens who may not have the capacity to understand digital products and services.
• Accountability: It must identify and affix accountability on AI creators for breaches, specifically in light of geographically de-localized and technologically de-centralized business models.
• Non-discrimination: Businesses should not create or reinforce unfair biases in the programming of their software (especially in AI implementations) or in the manufacturing of their devices.
• Positive Use: AI must be harnessed for the benefit of society, rather than misused for purposes such as surveillance and racial profiling.
Additionally, India must urgently re-design her privacy law framework to incorporate structural protections such as privacy by design, accuracy, reliability, and truth, so that automated decision-making and profiling are carried out in a transparent and non-discriminatory manner. Given the inherent harms involved in AI, an effective regulatory policy governing AI which protects consumers is only as good as its enforcement mechanism. Such a policy must also account for cross-border disputes, class action claims, and redressal mechanisms, and must set out strict consequences and penalties that apply to corporates and the Government alike.
Implementation of these principles touches aspects of consumer protection which have conventionally not been part of this framework. It would therefore involve bolstering the law on data protection, intermediary liability, net neutrality, and cybersecurity. Given the potentially extensive range of transactions enabled by AI, it would also require inputs from other disciplines, including tax, competition, corporate, and human rights law.
Conclusion
Since big corporates now enjoy far greater bargaining power than individuals, requiring them to adopt ethical self-regulatory frameworks is insufficient. As AI becomes increasingly ubiquitous, India must ensure that her legal framework is bolstered such that it can defend the privacy, autonomy, and choice of individuals.
StutiShahFirstEssay 2 - 05 Dec 2021 - Main.EbenMoglen
Hawking was not talking about puny pattern recognition programs. He was concerned about general artificial intelligence, which is nowhere in sight now any more than it was in his lifetime.
I recognize the ideas in this essay; they are familiar, of course. I also agree with the criticisms of the supposed data protection bill, about which Mishi and I have written more than once. I am not sure I think the definition of the citizen as "consumer" is precisely the best starting-point for improved legislation. That's where the best route to improvement is, in my view: what are the principles and architecture on which the statutory law should be made?
StutiShahFirstEssay 1 - 22 Oct 2021 - Main.StutiShah
ADDRESSING ETHICAL CHALLENGES IN AI
-- By StutiShah - 22 Oct 2021
Background and Context
With an explosion in the volume of commodifiable data, availability of inexpensive computational power, and investment in technology, machine learning that uses pattern recognition and autonomous software-based decision-making, known as 'Artificial intelligence' (AI), is making inroads into every facet of our lives. Though the Government and large corporates in India have commenced leveraging the benefits of AI across sectors, including insurance and banking, in an unregulated environment such as India's, AI could prove to be counter-productive. Since the Government is not able to accurately gauge the impact of AI, it has not regulated it as of yet, though it has on several occasions expressed the importance of doing so.
Data Privacy
Since data primarily fuels AI, big corporates have been exploiting the loopholes in India's toothless data protection law while collecting and processing personal data. The current data protection law does not require consent for the collection and processing of personal data, except for a very few types of personal data. Therefore, when a new personal data protection bill (PDP Bill) was published by the Indian Government in 2019, I was hopeful that it would address this vacuum. Though it is significantly more nuanced and favorable to data subjects than the prevailing law on data protection, I was disappointed that it retained many of its drawbacks. I have identified below the drawbacks that are most detrimental to data subjects in an AI regime.
First, even though the Indian Supreme Court has recognised decisional autonomy as an integral aspect of the right to privacy in the Indian Constitution, the PDP Bill is myopic in not protecting data subjects from the harms of automated profiling and decision-making. It also does not mandate that data subjects be informed of the process of automated profiling and decision-making. The opacity of the law and judicial processes in Kafka's 'The Trial' is reflective of how AI applications function. They decide whether data subjects are eligible for employment, a loan, or insurance. However, data subjects are oblivious to the processes employed by AI applications, and the basis on which they make decisions. This resonates with the experiences of the protagonist, K, who is ignorant of the reason for his arrest and therefore cannot defend himself against the charges. Similar to AI applications, when the judiciary in 'The Trial' acts unscrupulously, it cannot be held accountable.
Furthermore, by retaining consent as the primary basis on which personal data is to be collected and processed, the drafters of the PDP Bill have continued with a futile formality, premised on an unfair bargaining power in favor of sellers and service providers, which cannot effectively safeguard data subjects' privacy.
Finally, the Government's think tank, NITI Aayog, has published a number of papers on harnessing AI's potential in India, and what struck me was how flagrantly they advocated for the installation of 'sophisticated surveillance systems' that could track people's movement and behavior. Since the PDP Bill keeps the Government immune from all compliances, surveillance measures such as these can have dire consequences. When I read Orwell's '1984' and Atwood's 'The Handmaid's Tale' as a teenager, I never anticipated that these flights of fancy would prophesy the future. What troubles me more is that despite hearing and witnessing harrowing accounts of government surveillance, reminding us of Bentham's and Foucault's prescient writings on the Panopticon, as data subjects, Indians trust their Kafkaesque governments and corporates with their data (or rather have no choice).
Envisioning a Consumer-Centric Framework
It is important to envision a policy for AI which protects consumers, while taking into account the various services that AI offers.
India can re-design her privacy law framework to incorporate structural protections such as privacy by design, accuracy, reliability and truth, such that automated decision-making and profiling are carried out in a transparent and non-discriminatory manner. Given the inherent harms involved in AI, an effective regulatory policy governing AI which protects consumers is only as good as the enforcement mechanism used to implement it. In this regard, such a policy will address aspects such as identifying accountability, cross-border disputes, class action claims, redressal mechanisms, enforcement, and consequences and penalties. The policy should also promote positive use of AI for the benefit of society, rather than allow it to be misused for purposes such as surveillance and racial profiling. It should promote customer choice, and address information asymmetry. Implementation of these principles impacts aspects of consumer protection, which have conventionally not been a part of this framework. Such implementation would therefore involve bolstering the law on subjects such as data protection, intermediary liability, net neutrality, and cybersecurity. The Government should also consider opening up data and making it freely available to start-ups in an aggregated, anonymized format, so that no company monopolizes its use.
Conclusion
Since big corporates have now assumed an unequal bargaining power compared to individuals, requiring them to adopt ethical self-regulatory frameworks is insufficient and is also undemocratic. Therefore, to the extent self-regulation cannot be guaranteed, it should not be a substitute for a regulatory approach.
This policy on AI would also require a multi-disciplinary approach. Given the potentially extensive range of transactions enabled by AI, specific features in each critical sector must be considered and addressed. It would also require intra-disciplinary inputs from tax, competition, corporate, and human rights law.
As AI becomes increasingly ubiquitous, India must ensure that her legal framework can defend the privacy, autonomy, and choice of individuals. Stephen Hawking's warning rings true especially now, "Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilization."