Tag Archives: ethics

Artificial Intelligence Regulation

In previous articles, we discussed how artificial intelligence is biased and how this problem of biased artificial intelligence persists. As artificial intelligence (AI) becomes ever more prevalent, this poses many ethical problems. The question was raised whether the industry could be trusted to self-regulate or whether legal frameworks would be necessary. In this article, we explore current initiatives for Artificial Intelligence regulation. We look at initiatives within the industry to regulate artificial intelligence, as well as at attempts to create legal frameworks for Artificial Intelligence. But first we investigate why regulation is necessary.

Why is Artificial Intelligence Regulation necessary?

Last year, the Council of Europe published a paper in which it concluded that a legal framework was needed because there were substantive and procedural gaps. UNESCO, too, identified key issues in its Recommendation on the Ethics of Artificial Intelligence. Similarly, in its White Paper on Trustworthy AI, the Mozilla Foundation identifies a series of key challenges that need to be addressed and that make regulation desirable. These are:

  • Monopoly and centralization: Large-scale AI requires a lot of resources and at present only a handful of tech giants have those. This has a stifling effect on innovation and competition.
  • Data privacy and governance:  Developing complex AI systems necessitates vast amounts of data. Many AI systems that are currently being developed by large tech companies harvest people’s personal data through invasive techniques, and often without their knowledge or explicit consent.
  • Bias and discrimination: As was discussed in previous articles, AI relies on computational models, data, and frameworks that reflect existing biases. This in turn results in biased or discriminatory outcomes.
  • Accountability and transparency: Many AI systems just present an outcome without being able to explain how that result was reached. This can be the product of the algorithms and machine learning techniques that are being used, or it may be by design to maintain corporate secrecy. Transparency is needed for accountability and to allow third-party validation.
  • Industry norms: Tech companies tend to build and deploy tech rapidly. As a result, many AI systems are embedded with values and assumptions that are not questioned in the development cycle.
  • Exploitation of workers: Research shows that tech workers who perform the invisible maintenance of AI are vulnerable to exploitation and overwork.
  • Exploitation of the environment: The amount of energy needed for AI data mining makes it very environmentally unfriendly. The development of large AI systems intensifies energy consumption and speeds up the extraction of natural resources.
  • Safety and security: Cybercriminals have embraced AI. They are able to carry out increasingly sophisticated attacks by exploiting AI systems.

For all these reasons, the regulation of AI is necessary. Many large tech companies still promote the idea that the industry should be allowed to regulate itself. Many countries, as well as the EU, on the other hand believe the time is ripe for governments to impose a legal framework to regulate AI.

Initiatives within the industry to regulate Artificial Intelligence

Firefox and the Mozilla Foundation

The Mozilla Foundation is one of the leaders in the field when it comes to promoting trustworthy AI. They have already launched several initiatives, including advocacy campaigns, responsible computer science challenges, research, funds, and fellowships. The Foundation also points out that “developing a trustworthy AI ecosystem will require a major shift in the norms that underpin our current computing environment and society. The changes we want to see are ambitious, but they are possible.” They are convinced that the “best way to make this happen is to work like a movement: collaborating with citizens, companies, technologists, governments, and organizations around the world.”

IBM

IBM, too, promotes ethical and trustworthy AI, and has created its own ethics board. It believes AI should be built on the following principles:

  • The purpose of AI is to augment human intelligence
  • Data and insights belong to their creator
  • Technology must be transparent and explainable

To that end, it identified five pillars:

  • Explainability: Good design does not sacrifice transparency in creating a seamless experience.
  • Fairness: Properly calibrated, AI can assist humans in making fairer choices.
  • Robustness: As systems are employed to make crucial decisions, AI must be secure and robust.
  • Transparency: Transparency reinforces trust, and the best way to promote transparency is through disclosure.
  • Privacy: AI systems must prioritize and safeguard consumers’ privacy and data rights.

Google

Google says it “aspires to create technologies that solve important problems and help people in their daily lives. We are optimistic about the incredible potential for AI and other advanced technologies to empower people, widely benefit current and future generations, and work for the common good.” Its AI principles state that AI should:

  1. Be socially beneficial
  2. Avoid creating or reinforcing unfair bias
  3. Be built and tested for safety
  4. Be accountable to people
  5. Incorporate privacy design principles
  6. Uphold high standards of scientific excellence
  7. Be made available for uses that accord with these principles.

It also made it clear that it “will not design or deploy AI in the following application areas:

  1. Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks and will incorporate appropriate safety constraints.
  2. Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
  3. Technologies that gather or use information for surveillance violating internationally accepted norms.
  4. Technologies whose purpose contravenes widely accepted principles of international law and human rights.”

It adds that this list may evolve.

Still, Google seems to have a troubled relationship with ethical AI. It notoriously disbanded its entire external ethics board in 2019, relying instead on its in-house team of ethical AI researchers. When two of those researchers were subsequently fired, on separate occasions, it again made headlines.

Facebook / Meta

Whereas others talk about trustworthy and ethical AI, Meta (the parent company of Facebook) has different priorities and talks about responsible AI. It, too, identifies five pillars, each combining two concerns:

  1. Privacy & Security
  2. Fairness & Inclusion
  3. Robustness & Safety
  4. Transparency & Control
  5. Accountability & Governance

Legal frameworks for Artificial Intelligence

Apart from those initiatives within the industry, there are proposals for legal frameworks as well. Best known is the EU AI Act. Others are following suit.

The EU AI Act

The EU describes its AI act as “a proposed European law on artificial intelligence (AI) – the first law on AI by a major regulator anywhere. The law assigns applications of AI to three risk categories. First, applications and systems that create an unacceptable risk, such as government-run social scoring of the type used in China, are banned. Second, high-risk applications, such as a CV-scanning tool that ranks job applicants, are subject to specific legal requirements. Lastly, applications not explicitly banned or listed as high-risk are largely left unregulated.”

The text can be misleading: effectively, the proposal distinguishes not three but four levels of risk for AI applications: 1) unacceptable risk, where applications are banned; 2) high risk, where applications are subject to specific legal requirements; 3) low risk, where most of the time no regulation will be necessary; and 4) no risk, where no regulation is needed at all.
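To make that four-tier structure easier to picture, here is a minimal sketch that models it as a simple lookup. It is purely illustrative: the tier names mirror the wording above, and the example applications are only the ones cited in this article, not an official classification.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk levels the proposal effectively distinguishes."""
    UNACCEPTABLE = "banned outright"
    HIGH = "subject to specific legal requirements"
    LOW = "transparency obligations, otherwise largely unregulated"
    NONE = "no regulation necessary"

# Hypothetical mapping, based only on the examples mentioned in this article.
EXAMPLES = {
    "government-run social scoring": RiskTier.UNACCEPTABLE,
    "CV-scanning tool that ranks job applicants": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LOW,
    "spam filter": RiskTier.NONE,
}

def obligations(application: str) -> str:
    """Return the (simplified) regulatory consequence for a known example."""
    tier = EXAMPLES.get(application)
    if tier is None:
        return "not listed here; would need assessment against the Act's criteria"
    return f"{tier.name} risk: {tier.value}"

for app in EXAMPLES:
    print(f"{app} -> {obligations(app)}")
```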

By including an ‘unacceptable risk’ category, the proposal introduces the idea that certain types of AI applications should be forbidden because they violate basic human rights. All applications that manipulate human behaviour to deprive users of their free will, as well as systems that allow social scoring, fall into this category. Exceptions are allowed for military and law enforcement purposes.

High-risk systems “include biometric identification, management of critical infrastructure (water, energy etc), AI systems intended for assignment in educational institutions or for human resources management, and AI applications for access to essential services (bank credits, public services, social benefits, justice, etc.), use for police missions as well as migration management and border control.” Again, there are exceptions, many of which concern cases where biometric identification is allowed, e.g., searches for missing children or for suspects of terrorism, trafficking, and child pornography. The EU wants to create a database that keeps track of all high-risk applications.

Limited-risk or low-risk applications include the various bots that companies use to interact with their customers. The idea here is that transparency is required: users must know, e.g., that they are interacting with a chatbot and what information the chatbot has access to.

All AI systems that do not pose any risk to citizens’ rights are considered no-risk applications for which no regulation is necessary. These applications include games, spam filters, etc.

Who does the EU AI Act apply to? As is the case with the GDPR, the EU AI Act does not apply exclusively to EU-based organizations and citizens. It also applies to anybody outside of the EU who offers an AI application (product or service) within the EU, or whose AI system uses information about EU citizens or organizations. Furthermore, it also applies to systems outside of the EU that use results generated by AI systems within the EU.

A work in progress: the EU AI Act is still very much a work in progress. The Commission made its proposal, and now the legislators can give feedback. At present, more than a thousand amendments have been submitted. Some factions think the framework goes too far, while others claim it does not go far enough. Much of the discussion deals with both defining and categorizing AI systems.

Other noteworthy initiatives

Apart from the European AI Act, there are some other noteworthy initiatives.

Council of Europe: The Council of Europe (responsible for the European Convention on Human Rights) created its own Ad Hoc Committee on Artificial Intelligence. In 2021, this Ad Hoc Committee published a paper called A Legal Framework for AI Systems. The paper was a feasibility study that explored why a legal framework on the development, design, and application of AI, based on the Council of Europe’s standards on human rights, democracy, and the rule of law, is needed. It identified several substantive and procedural gaps and concluded that a comprehensive legal framework is needed, combining both binding and non-binding instruments.

UNESCO published its Recommendation on the Ethics of Artificial Intelligence, which was endorsed by 193 countries in November 2021.

US: On 4 October 2022, the White House released the Blueprint for an AI Bill of Rights to set up a framework that can protect people from the negative effects of AI.

UK: No government initiatives exist yet in the UK, but on 16 September 2022 Cambridge University published a paper on A Legal Framework for Artificial Intelligence Fairness Reporting.

 


Artificial Intelligence, Ethics, and Law

Every day, artificial intelligence (AI) is becoming more entrenched in our lives. Even cheap smart phones have cameras that use AI to optimize the pictures we are taking. Try getting online assistance for a problem you are facing, and you are likely to first be met by a chatbot rather than a person. We have self-driving cars, trucks, buses, taxis, trains, etc. AI can be a force for good, but it can also be a force for bad. Cybercriminals are using AI to steal identities and corporate secrets, to gain illegal access to systems, transfer funds, avoid police detection, etc. AI is being weaponized and militarized. This raises ethical concerns, and the possibility that legal frameworks will have to be implemented to address those concerns.

Let us first touch upon some of the ethical problems we are already being confronted with. The facial recognition software that is being implemented in airports and big cities raises both privacy and security concerns. The same concerns pertain to using big data for Machine Learning. In previous articles, we already looked at the problem of bias in AI, where AI algorithms inherit our biases because those biases are reflected in the data sets they use. One of the areas where the ethical issues of AI really come to the forefront is self-driving vehicles. Let us explore that example in more depth.

Sometimes traffic accidents cannot be avoided, and they may lead to fatalities. Imagine the brakes of your car stop functioning while you are driving down a street. Ahead of you, some children are getting out of a car that is standing still; in the lane for oncoming traffic a truck is approaching; and on the far side of the road some people are standing on the pavement, talking. What do you do? And what is a self-driving car supposed to do? With self-driving cars, the car maker may have to make that decision for you.

In ethics, this problem is usually referred to as the Trolley Problem. A runaway trolley is racing down a railroad track, and you are standing at a switch that can change the track it is on. If you do nothing, five people will be killed. If you throw the switch, one person will be killed. What is the right thing to do?

The Moral Machine experiment is the name of an online project in which different variations of the Trolley Problem were presented to people from all over the world. It asked questions to determine whether saving humans should be prioritized over animals (including pets), passengers over pedestrians, more lives over fewer, men over women, young over old, etc. It even asked whether healthy and fit people should be prioritized over sick ones, people with a high social status over people with a low social status, or law-abiding citizens over ones with criminal records. Rather than posing the questions directly, the survey typically presented people with combined options: kill three elderly pedestrians or three young passengers?
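As a toy illustration of how such pairwise dilemma answers can be turned into attribute-level preferences, consider the sketch below. This is not the Moral Machine’s actual methodology or data format; the responses and the scoring are invented purely to show the idea.

```python
from collections import Counter

# Invented, hypothetical responses: each records which side of one dilemma was
# spared and which was sacrificed, tagged with the attributes of that side.
responses = [
    {"country": "DE", "spared": {"young", "passengers"}, "sacrificed": {"elderly", "pedestrians"}},
    {"country": "DE", "spared": {"elderly", "pedestrians"}, "sacrificed": {"young", "passengers"}},
    {"country": "JP", "spared": {"elderly", "pedestrians"}, "sacrificed": {"young", "passengers"}},
]

def preference_scores(responses):
    """Crude per-country tally: +1 each time an attribute is spared, -1 each time it is sacrificed."""
    scores = {}
    for r in responses:
        tally = scores.setdefault(r["country"], Counter())
        tally.update(r["spared"])       # adds 1 per spared attribute
        tally.subtract(r["sacrificed"])  # subtracts 1 per sacrificed attribute
    return scores

print(preference_scores(responses))
# A strongly positive score for "young" in one country and a negative one in another
# would reflect the kind of cultural differences described below.
```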

Overall, the experiment gathered 40 million decisions in 10 languages from millions of people in 233 countries and territories. Surprisingly, the results tended to vary greatly from country to country, from culture to culture and along lines of economics. “For example, participants from collectivist cultures like China and Japan are less likely to spare the young over the old—perhaps, the researchers hypothesized, because of a greater emphasis on respecting the elderly. Similarly, participants from poorer countries with weaker institutions are more tolerant of jaywalkers versus pedestrians who cross legally. And participants from countries with a high level of economic inequality show greater gaps between the treatment of individuals with high and low social status.” (Karen Hao, in Technology Review)

In general, people across the world agreed that sparing the lives of humans over animals should take priority, and that many people should be saved rather than few. In most countries, people also thought the young should be spared over the elderly but, as mentioned above, that was not the case in the Far East.

Now, this of course raises some serious questions. Who is going to make those decisions, and what will they choose, given the different preferences people expressed? Are we going to have different priorities depending on whether we are using, e.g., Japanese or German self-driving cars? Or will the car makers have the car make different choices based on where it is driving? And what if more lives can be spared by sacrificing the driver?

When it comes to sacrificing the driver, one car manufacturer, Mercedes, has already made clear that this will never be an option. The justification it gives is that self-driving cars will lead to far fewer accidents and fatalities, and that those occasions where pedestrians are sacrificed for drivers will be cases of acceptable collateral damage. But is that the right choice, and is it really up to the car maker to make it?

An ethicist identified four chief concerns that must be addressed when we look for solutions with regard to ethical AI:

  1. Whose moral standards should be used?
  2. Can machines converse about moral issues? (What if e.g. multiple self-driving vehicles are involved? Will they communicate with each other to choose the best scenario?)
  3. Can algorithms take context into account?
  4. Who should be accountable?

Based on these considerations, some principles can be established to regulate the use of AI. In a previous article, we already mentioned the principles the EU and the OECD suggest. In 2018, the World Economic Forum had also suggested five core principles to keep AI ethical:

  • AI must be a force for good and diversity
  • Intelligibility and fairness
  • Data protection
  • Flourishing alongside AI
  • Confronting the power to destroy

An initiative that involves several tech companies also identified seven critical points:

  1. Invite ethics experts that reflect the diversity of the world
  2. Include people who might be negatively impacted by AI
  3. Get board-level involvement
  4. Recruit an employee representative
  5. Select an external leader
  6. Schedule enough time to meet and deliberate
  7. Commit to transparency

A deeper question, however, is whether the regulation of AI should really be left to the industry. Shouldn’t these decisions rather be made by governments? The people behind the Moral Machine experiment think so, as do many scientists and experts in ethics. Thus far, however, not much has been done when it comes to legal solutions. At present, there are no legal frameworks in place. The best we have are the guidelines that the EU and the OECD have put in place for their members, but those are merely guidelines and are not enforceable. And that is not enough. A watchdog organization in the UK warned that AI is progressing so fast that we are already having difficulties catching up. We cannot afford to postpone addressing these issues any longer.


International Guidelines for Ethical AI

In the last two months, i.e. in April and May 2019, both the EU Commission and the OECD published guidelines for trustworthy and ethical Artificial Intelligence (AI). In both cases, these are only guidelines and, as such, are not legally binding. Both sets of guidelines were compiled by experts in the field. Let’s have a closer look.

“Why do we need guidelines for trustworthy, ethical AI?” you may ask. Over the last few years, there have been multiple calls from experts, researchers, lawmakers, and the judiciary to develop some kind of legal framework or guidelines for ethical AI. Several cases have been in the news where the ethics of AI systems came into question. One of the problem areas is bias with regard to gender, race, etc. There was, e.g., the case of COMPAS, a risk assessment program that is used to predict the likelihood of somebody being a repeat offender. It turned out the system had a double racial bias: one in favour of white defendants, and one against black defendants. More recently, Amazon shelved its AI HR assistant because it systematically favoured male applicants. Another problem area is privacy, where there are concerns about deep learning / machine learning, and about technologies like, e.g., facial recognition.
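To make the COMPAS finding more tangible: one way such a double bias shows up is in the false positive rate per group, i.e. the share of people who did not reoffend but were nevertheless flagged as high risk. The sketch below illustrates that calculation on invented records; it is not the actual COMPAS data or the original analysis.

```python
from collections import defaultdict

# Invented records, purely for illustration: (group, flagged_high_risk, reoffended).
records = [
    ("group_a", True,  False),
    ("group_a", False, False),
    ("group_a", False, False),
    ("group_b", True,  False),
    ("group_b", True,  False),
    ("group_b", False, False),
]

def false_positive_rates(records):
    """FPR per group: share of people who did NOT reoffend but were still flagged as high risk."""
    flagged = defaultdict(int)
    non_reoffenders = defaultdict(int)
    for group, flagged_high, reoffended in records:
        if not reoffended:
            non_reoffenders[group] += 1
            if flagged_high:
                flagged[group] += 1
    return {g: flagged[g] / non_reoffenders[g] for g in non_reoffenders}

print(false_positive_rates(records))
# e.g. {'group_a': 0.33, 'group_b': 0.67}: a gap of this kind between groups is
# what a disparate false-positive analysis is meant to expose.
```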

In the case of the EU guidelines, another factor is at play as well. Both the US and China have a substantial lead over the EU when it comes to AI technologies. The EU saw its niche in trustworthy and ethical AI.

EU Guidelines

The EU guidelines were published by the EU Commission on 8 April 2019. (Before that, in December 2018, the European Parliament had already published a report in which it asked for a legal framework or guidelines for AI. The EU Parliament suggested AI systems should be broadly designed in accordance with The Three Laws of Robotics). The Commission stated that trustworthy AI should be:

  • lawful, i.e. respecting all applicable laws and regulations,
  • ethical, i.e. respecting ethical principles and values, and
  • robust, both from a technical perspective and taking into account its social environment.

To that end, the guidelines put forward a set of 7 key requirements:

  • Human agency and oversight: AI systems should empower human beings, allowing them to make informed decisions and fostering their fundamental rights. At the same time, proper oversight mechanisms need to be ensured, which can be achieved through human-in-the-loop, human-on-the-loop, and human-in-command approaches
  • Technical Robustness and safety: AI systems need to be resilient and secure. They need to be safe, ensuring a fall-back plan in case something goes wrong, as well as being accurate, reliable and reproducible. That is the only way to ensure that unintentional harm can also be minimized and prevented.
  • Privacy and data governance: besides ensuring full respect for privacy and data protection, adequate data governance mechanisms must also be ensured, taking into account the quality and integrity of the data, and ensuring legitimised access to data.
  • Transparency: the data, system and AI business models should be transparent. Traceability mechanisms can help achieve this. Moreover, AI systems and their decisions should be explained in a manner adapted to the stakeholder concerned. Humans need to be aware that they are interacting with an AI system, and must be informed of the system’s capabilities and limitations.
  • Diversity, non-discrimination and fairness: Unfair bias must be avoided, as it could have multiple negative implications, from the marginalization of vulnerable groups, to the exacerbation of prejudice and discrimination. Fostering diversity, AI systems should be accessible to all, regardless of any disability, and involve relevant stakeholders throughout their entire life cycle.
  • Societal and environmental well-being: AI systems should benefit all human beings, including future generations. It must hence be ensured that they are sustainable and environmentally friendly. Moreover, they should consider the environment, including other living beings, and their social and societal impact should be carefully considered.
  • Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes. Auditability, which enables the assessment of algorithms, data and design processes, plays a key role therein, especially in critical applications. Moreover, adequate and accessible redress should be ensured.

A pilot project will be launched later this year, involving the main stakeholders. It will review the proposal more thoroughly and provide feedback, upon which the guidelines can be fine-tuned. The EU also invites interested businesses to join the European AI Alliance.

OECD

The OECD consists of 36 members, approximately half of which are EU members. Non-EU members include the US, Japan, Australia, New Zealand, South Korea, Mexico, and others. On 22 May 2019, the OECD member countries adopted the OECD Council Recommendation on Artificial Intelligence. As is the case with the EU guidelines, these are recommendations that are not legally binding.

The OECD Recommendation identifies five complementary values-based principles for the responsible stewardship of trustworthy AI:

  1. AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being.
  2. AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and they should include appropriate safeguards – for example, enabling human intervention where necessary – to ensure a fair and just society.
  3. There should be transparency and responsible disclosure around AI systems to ensure that people understand AI-based outcomes and can challenge them.
  4. AI systems must function in a robust, secure and safe way throughout their life cycles and potential risks should be continually assessed and managed.
  5. Organisations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning in line with the above principles.

Consistent with these value-based principles, the OECD also provides five recommendations to governments:

  1. Facilitate public and private investment in research & development to spur innovation in trustworthy AI.
  2. Foster accessible AI ecosystems with digital infrastructure and technologies and mechanisms to share data and knowledge.
  3. Ensure a policy environment that will open the way to deployment of trustworthy AI systems.
  4. Empower people with the skills for AI and support workers for a fair transition.
  5. Co-operate across borders and sectors to progress on responsible stewardship of trustworthy AI.

As you can see, many of the fundamental principles are similar in both sets of guidelines. And, as mentioned before, these EU and OECD guidelines are merely recommendations that are not legally binding. As far as the EU is concerned, at some point in the future, it may push through actual legislation that is based on these principles. The US has already announced it will adhere to the OECD recommendations.

 


Lawyers and Tech Competency

Lawyers and technology often have a strained relationship, with many lawyers displaying a distinct reluctance to familiarize themselves with new technologies. Still, tech competency not only provides a competitive edge; by now, for most lawyers, it has also become an ethical requirement.

In the US, e.g., the American Bar Association’s House of Delegates formally approved a change to the Model Rules of Professional Conduct in August 2012. The new text makes it clear that lawyers have a duty to be competent not only in the law and its practice, but also in technology. Following this change, a lack of tech competency could lead to disciplinary action for misconduct.

The new text of Comment 8 to Model Rule 1.1, which pertains to competence, now states (emphasis added):

To maintain the requisite knowledge and skill, a lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology, engage in continuing study and education and comply with all continuing legal education requirements to which the lawyer is subject.

The rule requires lawyers to keep up with the wide range of technology that can be used in the delivery of their services. This means they must stay abreast of the potential risks and benefits associated with any technology they use. It applies, e.g., to word-processing software, email services, and security, including safeguarding confidential information, as well as to practice management tools. In some cases, it may even apply to e-discovery or metadata analysis. Casey Flaherty gives the example that a lawyer should probably know how to convert a document to PDF, or at least know how to create a document that is completely ready to be converted. In another example, he mentions that a lawyer who is working on a contract with numbered clauses and delegates it to another lawyer should know how to use automatic numbering and cross-referencing.

The competence clause adopted by the American Bar Association is a model rule, which means it must be adopted in a state for it to apply there. By now, 26 states have done so and impose an ethical duty of legal tech competence.

As a model rule, each state can implement the rule as it sees fit. In Florida, e.g., this implies that, as of 1 January 2017, all lawyers are required, as part of their Continuing Legal Education, to spend a minimum of three hours over three years in an approved technology program. California, on the other hand, requires lawyers to have knowledge of e-discovery. Indeed, in an age when any court case can involve electronic evidence, every Californian attorney who sets foot in a courtroom has a basic duty of competence with regard to e-discovery.

The rule does not require lawyers to become technology experts, as they can use the assistance of advisors who have the necessary knowledge. Florida’s competence rule, e.g., states that “… competent representation may involve a lawyer’s association with, or retention of, a non-lawyer advisor with established technological competence in the relevant field.”

Coming back to the example with regard to California and e-discovery, it means that a lawyer in California could face disciplinary action for not properly handling the e-discovery aspects of a case. Robert Ambrogi, in Above the Law, puts it as follows:

That is the key: You need to know enough about e-discovery to assess your own capability to handle the issues that may arise and, if you lack sufficient capability, you can effectively “contract out” your competence to someone else. That someone else could be another attorney in your firm, an outside attorney, a vendor or even your client, the opinion says, provided the person has the necessary expertise. (You cannot, however, contract out your duty to supervise the case and protect your client’s confidentiality.)

By now, two courts have already confirmed that tech competency is required for lawyers. One judge stated that “Professed technological incompetence is not an excuse for discovery misconduct.”

Because of the growing demand for tech-savvy lawyers, several law school deans are pushing to add tech to the curriculum. They generally agree that “law schools are a bit remiss in not offering more technology-based training to law students and that they should include legal technology training in the current law school curriculum. The roundtable concluded with the collective position that all law schools in the U.S. owe it to their student bodies to introduce technology-oriented topics into the curriculum in some form or fashion.”

 
