Tag Archives: AI

The dangers of artificial intelligence

Artificial intelligence (AI) is a powerful technology that can bring many benefits to society. However, AI also poses significant risks and challenges that need to be addressed with caution and responsibility. In this article, we explore the questions, “What are the dangers of artificial intelligence?”, and “Does regulation offer a solution?”

The possible dangers of artificial intelligence have been making headlines lately. First, Elon Musk and several experts called for a pause in the development of AI. They were concerned that, given how much progress has been made recently, we could lose control over AI, and they expressed their worries that AI could pose a genuine risk to society. A second group of experts, however, replied that Musk and his co-signatories were severely overstating the risks involved and labelled them “needlessly alarmist”. But then a third group of experts again warned of the dangers of artificial intelligence. This third group included people like Geoffrey Hinton, who has been called the godfather of AI. They even explicitly stated that AI could lead to the extinction of humankind.

Since those three groups stated their views, many articles have been written about the dangers of AI. And the calls to regulate AI have become louder than ever before. (We published an article on initiatives to regulate AI in October 2022). Several countries have started taking initiatives.

What are the dangers of artificial intelligence?

So, what are the dangers of artificial intelligence? As with any powerful technology, it can be used for nefarious purposes. It can be weaponized and used for criminal purposes. But even the proper use of AI holds inherent risks and can lead to unwanted consequences. Let us have a closer look.

A lot of attention has already been paid in the media to the errors, misinformation, and hallucinations of artificial intelligence. Tools like ChatGPT are programmed to sound convincing, not to be accurate. They get their information from the Internet, but the Internet contains a lot of information that is not correct. So, their answers will reflect this. Worse, because these tools are programmed to provide an answer whenever they can, they sometimes just make things up. Such instances have been called hallucinations. In a lawsuit in the US, e.g., a lawyer had to admit that the precedents he had quoted did not exist and had been fabricated by ChatGPT. (In a previous article on ChatGPT, we warned that any legal feedback it gives must be double-checked).

As soon as ChatGPT became available, cybercriminals started using it to their advantage. A second set of dangers therefore has to do with cybercrime and cybersecurity threats: AI can be exploited by malicious actors to launch sophisticated cyberattacks. This includes using AI algorithms to automate and enhance hacking techniques, identify vulnerabilities, and breach security systems. Phishing attacks have also become more sophisticated and harder to detect.

AI can also be used for cyber espionage and surveillance: AI can be employed for sophisticated cyber espionage activities, including intelligence gathering, surveillance, and intrusion into critical systems. Related to this is the risk of invasion of privacy and data manipulation. AI can collect and analyse massive amounts of personal data from various sources, such as social media, cameras, sensors, and biometrics. This can enable AI to infer sensitive information about people’s identities, preferences, behaviours, and emotions. AI can also use this data to track and monitor people’s movements, activities, and interactions. This can pose threats to human rights, such as freedom of expression, association, and assembly.

Increased use of AI will also lead to the loss of jobs due to automation. AI can perform many tasks faster and cheaper than humans, which will lead to unemployment and inequality. An article on ZDNet estimates that AI could automate 300 million jobs, and that approximately 28% of current jobs could be at risk.

There also is a risk of loss of control. As AI systems become more powerful, there is a risk that we will lose control over them. This could lead to AI systems making decisions that are harmful to humans, such as launching a nuclear attack or starting a war. This risk of losing control is a major concern when it comes to the weaponization of AI. As AI technology advances, there is a worry that it could be weaponized by state or non-state actors. Autonomous weapon systems equipped with AI could potentially make lethal decisions without human intervention, leading to significant ethical and humanitarian concerns.

We already mentioned errors, misinformation, and hallucinations. Those are involuntary side-effects of AI.  A related danger of AI is the deliberate manipulation and misinformation of society through algorithms. AI can generate realistic and persuasive content, such as deepfakes, fake news, and propaganda, that can influence people’s opinions and behaviours. AI can also exploit people’s psychological biases and preferences to manipulate their choices and actions, such as online shopping, voting, and dating.

Generative AI tends to use existing data as its basis for creating new content. But this can cause issues of infringement of intellectual property rights. (We briefly discussed this in our article on generative AI).

Another risk inherent to the fact that AI learns from large datasets is bias and discrimination. If this data contains biases, then AI can amplify and perpetuate them. This poses a significant danger in areas such as hiring practices, lending decisions, and law enforcement, where biased AI systems can lead to unfair outcomes. And if AI technologies are not accessible or affordable for all, they could exacerbate existing social and economic inequalities.

Related to this are ethical implications. As AI systems become more sophisticated, they may face ethical dilemmas, such as decisions involving human life or the prioritization of certain values. Think, e.g., of self-driving vehicles when an accident cannot be avoided: do you sacrifice the driver if it means saving more lives? It is crucial to establish ethical frameworks and guidelines for the development and deployment of AI technologies. Encouraging interdisciplinary collaboration among experts in technology, ethics, and philosophy can help navigate these complex ethical challenges.

At present, there is insufficient regulation regarding the accountability and transparency of AI. As AI becomes increasingly autonomous, accountability and transparency become essential to address the potential unintended consequences of AI. In a previous article on robot law, we asked who is accountable when, e.g., a robot causes an accident. Is it the manufacturer, the owner, or – as AI becomes more and more self-aware – could it be the robot itself? Similarly, when ChatGPT provides false information, who is liable? In the US, Georgia radio host Mark Walters found that ChatGPT was spreading false information about him, accusing him of embezzling money. So, he is suing OpenAI, the creators of ChatGPT.

As the abovementioned example of the lawyer quoting non-existent precedents illustrates, there is also a risk of dependence and overreliance: relying too heavily on AI systems without proper understanding or human oversight can lead to errors, system failures, or the loss of critical skills and knowledge.

Finally, there is the matter of superintelligence that several experts warn about. They claim that the development of highly autonomous AI systems with superintelligence surpassing human capabilities poses a potential existential risk. The ability of such systems to rapidly self-improve and make decisions beyond human comprehension raises concerns about control and ethical implications. Managing this risk requires ongoing interdisciplinary research, collaboration, and open dialogue among experts, policymakers, and society at large. On the other hand, one expert said that it is baseless to automatically assume that superintelligent AI will become destructive, just because it could. Still, the EU initiative includes the requirement of building in a compulsory kill switch that allows the AI to be switched off at any given moment.

Does regulation offer a solution?

In recent weeks, several countries have announced initiatives to regulate AI. The EU already had its own initiative. At the end of May, its tech chief Margrethe Vestager said she believed a draft voluntary code of conduct for generative AI could be drawn up “within the next weeks”, with a final proposal for industry to sign up “very, very soon”. The US, Australia, and Singapore also have submitted proposals to regulate AI.

Several of the abovementioned dangers can be addressed through regulation. Let us go over some examples.

Regulations for cybercrime and cybersecurity should emphasize strong cybersecurity measures, encryption standards, and continuous monitoring for AI-driven threats.

To counter cyber espionage and surveillance risks, we need robust cybersecurity practices, advanced threat detection tech, and global cooperation to share intelligence and establish norms against cyber espionage.

Privacy and data protection regulations should enforce strict standards, incentivize secure protocols, and impose severe penalties for breaches, safeguarding individuals and businesses from AI-enabled cybercrime.

To prevent the loss of jobs, societies need to invest in education and training for workers to adapt to the changing labour market and create new opportunities for human-AI collaboration.

Addressing AI weaponization requires international cooperation, open discussions, and establishing norms, treaties, or agreements to prevent uncontrolled development and use of AI in military applications.

To combat deepfakes and propaganda, we must develop ethical standards and regulations for AI content creation and dissemination. Additionally, educating people on critical evaluation and information verification is essential.

Addressing bias and discrimination involves ensuring diverse and representative training data, rigorous bias testing, and transparent processes for auditing and correcting AI systems. Ethical guidelines and regulations should promote fairness, accountability, and inclusivity.

When it comes to accountability and transparency, regulatory frameworks can demand that developers and organizations provide clear explanations of how AI systems make decisions. This enables better understanding, identification of potential biases or errors, and the ability to rectify any unintended consequences.

At the same time, regulation also has its limitations. While it is important, e.g., to regulate things like cybercrime or the weaponization of AI, it is also clear that regulation will not put an end to these practices. After all, cybercriminals by definition don’t tend to care about regulations. And despite the fact that several types of weapons of mass destruction have been outlawed, it is clear that they are still being produced and used by several actors. But regulation does help to hold transgressors accountable.

It is also difficult to assess how disruptive the impact of AI will be on society. Depending on how disruptive it is, additional measures may be needed.

Conclusion

We have reached a stage where AI has become so advanced that it will change the world and the way we live. This is already creating issues that need to be addressed. And as with any powerful technology, it can be abused. Those risks, too, need to be addressed. But while we must acknowledge these issues, it should also be clear that the benefits outweigh the risks, as long as we don’t get ahead of ourselves. At present, humans abusing AI are a greater danger than AI itself.

 


An introduction to smart contracts

In a previous article, we wrote about Artificial Intelligence (AI) and contracts. AI is having an impact on contracts in three areas: 1. contract review, 2. contract management and automation, and 3. smart contracts. While smart contracts are automated contracts, what sets them apart from other automated contracts is the use of Blockchain technology.

What are smart contracts? We’ll combine elements from the definitions used by TechRepublic and Investopedia to explain: a smart contract is a software-based contract between a buyer and a seller. The software automates the business processes and the conditions of fulfilment contained within the contract. The code programmed into the contract makes the contract self-executing, so that it takes action whenever a specific condition within the contract is triggered. The code and the agreements contained therein exist across a distributed, decentralized Blockchain network. Smart contracts permit trusted transactions and agreements to be carried out among disparate, anonymous parties without the need for a central authority, legal system, or external enforcement mechanism. They render transactions traceable, transparent, and irreversible. And because the smart contract is software that automates business processes and contract fulfilment, it eliminates the need for supervision by managers and middlemen.

Let’s give an example: A is a supplier of products for B. Every month, B places an order with A. It makes sense to automate this process. The smart contract is a piece of software that, e.g., would contain the code that says: if an order is received by A from B, and B is not in arrears, then that order must be executed. With smart contracts, these transactions are typically registered in a distributed, decentralized Blockchain network of ledgers. In a previous article we explained that Blockchain is a technology that registers transactions in a ledger, of which everybody in the network has a copy. Transactions are secured by using a verification code that is calculated based on all previous transactions in the ledger. In essence, to forge a transaction, one would therefore have to forge all registrations of all transactions in all ledgers.
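
To make this more concrete, here is a minimal sketch in Python of the order rule described above. The names (Order, arrears, execute_order) are invented for illustration; a real smart contract would be written in a blockchain language such as Solidity and recorded on-chain rather than run as ordinary software.

```python
from dataclasses import dataclass

@dataclass
class Order:
    buyer: str       # e.g. "B"
    supplier: str    # e.g. "A"
    quantity: int

# Hypothetical record of outstanding balances per buyer
arrears = {"B": 0.0}

def execute_order(order: Order) -> str:
    """Self-executing rule: fulfil the order only if the buyer is not in arrears."""
    if arrears.get(order.buyer, 0.0) > 0:
        return f"Order rejected: {order.buyer} is in arrears."
    # In an actual smart contract, this step would be registered
    # as a transaction on the blockchain ledger.
    return f"Order of {order.quantity} units accepted for {order.buyer}."

print(execute_order(Order(buyer="B", supplier="A", quantity=100)))
```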

The benefits of smart contracts are clear: the whole process of transactions between parties can be automated, and by using Blockchain technology one has virtually irrefutable proof of the transactions. Add to that that programming code tends to be less ambiguous than the generic legalese of traditional contracts, so the chances of disputes about the interpretation of smart contracts are smaller.

The usage of smart contracts is expected to grow fast. A survey published in Forbes Magazine predicts that by 2022, 25% of all companies will be using them. Basically, in any market where Blockchain technology is useful, one can expect smart contracts to be useful, too. Smart contracts can also be the perfect complement to EDI (Electronic Data Interchange). At present, smart contract applications are already being used in – or developed for – supply chains and logistics, finance and securities, real estate, management and operations, healthcare, insurance, etc.

Still, one has to be aware of the limitations of smart contracts, as there are a number of legal issues to take into account. The name ‘smart contracts’ is misleading in that they aren’t really contracts but software. As such, there are legal concerns with regard to:

  • Offer and acceptance: is there even a binding contract, if there is no human interaction or supervision, and the transaction is completely executed automatically?
  • The evidentiary value: smart contracts are not written evidence of the agreed rights and obligations, because they encapsulate only the portion of those rights and obligations that relates to contractual performance
  • Jurisdiction: is the area of jurisdiction clearly defined in case of a conflict or dispute?
  • Dispute Resolution: are there any dispute resolution mechanisms in place?

When considering working with smart contracts, it is therefore a good idea to first come to a framework agreement in which these issues are addressed. And those will preferably still be written by lawyers.

 


When robot lawyers and lawyers compete

‘Man versus Machine’ contests are always popular. We have already seen such contests, in which Artificial Intelligence (AI) systems defeated human experts at chess, Jeopardy, and Go. And in recent months, robot lawyers have been giving human lawyers a run for their money, too. In the headlines this week is a story of an Artificial Intelligence system outperforming human lawyers in reviewing non-disclosure agreements. Some months ago, in October 2017, there was a similar story about an AI system that was better at predicting the outcome of cases about Payment Protection Insurance than human lawyers were. Let’s start with the latter.

Case Cruncher Alpha

One of the first occasions where robot lawyers beat their human counterparts happened in October 2017, in the UK. The Artificial Intelligence system taking on the human lawyers was Case Cruncher Alpha, developed by Casecrunch. Casecrunch is the current name for the company that previously had developed LawBot, Lawbot-X, and Divorcebot. It was started by a group of Cambridge University Law students. (See our article on Legal Chatbots). Case Cruncher Alpha is designed to study existing legal cases in order to predict the outcome of similar cases.

In the contest, Case Cruncher Alpha competed with 112 lawyers. All of them were commercial lawyers from London. None of them were experts in the specific matter the cases dealt with.

The purpose of the contest was to predict the outcome (success or failure) of cases dealing with Payment Protection Insurance (PPI). The topic is well known in the UK, where several banks were ordered to pay damages to customers to whom they had sold insurance products that they didn’t require. The contestants were given real cases that had been handled by the UK Financial Ombudsman Service. The lawyers were permitted to use all available resources.

The result was a staggering defeat for the lawyers: they achieved an average accuracy score of 62%, while Case Cruncher Alpha scored 86.6%. (No data were provided on how individual lawyers did, or on whether any of them had scored better than the AI system).

Richard Tromans from artificiallawyer.com rightfully points out that evaluating the results of this contest is tricky. They do not mean that machines are generally better at predicting outcomes than lawyers. What they do show is that if the question is defined precisely enough, machines can compete with, and sometimes outperform, human lawyers. At the same time, this experiment also suggests that there may be factors other than legal factors that contributed to the outcome of the cases. But what the contest undoubtedly made clear was that legal prediction systems can solve legal bottlenecks.

Ian Dodd was one of the judges. He believes AI may replace some of the grunt work done by junior lawyers and paralegals, but that no machine can talk to a client or argue in front of a High Court judge. In his view, “The knowledge jobs will go, the wisdom jobs will stay.”

LawGeex

Another occasion where robot lawyers got the upper hand was in a contest in February 2018, in the US. In this case, the Artificial Intelligence system was developed by LawGeex, the company that was started in 2014 by commercial lawyer Noory Bechor (cf. our previous article on AI and contracts). Noory Bechor had come to the realization that 80% of his work consisted of reviewing contracts and was highly repetitive. He believed it could be done faster, cheaper and more effectively by a computer, and that’s why he started LawGeex.

In this challenge, the AI system competed with 20 lawyers in reviewing Non-Disclosure Agreements. All 20 lawyers were experts in the field. The LawGeex report explains: “The study asked each lawyer to annotate five NDAs according to a set of Clause Definitions. Each lawyer was given four hours to find the relevant issues in all five NDAs.” In that time, they had to identify 30 legal issues, including arbitration, confidentiality of relationship, and indemnifications. They were given scores reflecting how accurately they identified each issue.

Once again, the AI system did better than the human lawyers. It achieved an accuracy rate of 94%, whereas the lawyers achieved an average of 85%. There were, however, considerable differences in how well the lawyers did. The lowest performing lawyer, e.g., scored only 67%, while the two best performing lawyers beat the AI system and achieved, depending on the source, either 94 or 95% accuracy. On one specific NDA, one lawyer identified only 55% of the relevant issues, while on another NDA the AI system reached a score of 100%, against 97% for the best human lawyers.

The human lawyers were no competition for the AI system when it came to speed. The AI system cleared the job in 26 seconds, while the lawyers took 92 minutes on average. The slowest lawyer took 151 minutes, the fastest 51 minutes.

Gillian K. Hadfield is a Professor of Law and Economics at the University of Southern California who advised on the test. She says, “This research shows technology can help solve two problems: both making contracts faster and more reliable, and freeing up resources so legal departments can focus on building the quality of their human legal teams.” In other words, the use of AI can actually help lawyers expedite their work, and free them up to focus on tasks that still require human skills.

 

Robots, Liability and Personhood

There were some interesting stories in the news lately about Artificial Intelligence and robots. There was the story, e.g., of Sophia, the first robot to officially become a citizen of Saudi Arabia. There also was a bar exam in the States that included the question whether we are dealing with murder or product liability if a robot kills somebody. Several articles were published on companies that build driverless cars and how they have to contemplate ethical and legal issues when deciding what to do when an accident is unavoidable. If a mother with a child on her arm, e.g., unexpectedly steps in front of the car, what does the car do? Does it hit the mother and child? Does it swerve around the mother and hit the motorcycle in the middle lane that is waiting to turn? Or does it veer into the lane of oncoming traffic, with the risk of killing the people in the driverless car? All these stories raise some thought-provoking legal issues. In this article we’ll have a cursory glance at personhood and liability with regard to Artificial Intelligence systems.

Let’s start with Sophia. She is a robot designed by Hong Kong-based AI robotics company Hanson Robotics. Sophia is programmed to learn and mimic human behaviour and emotions, and functions autonomously. When she has a conversation with someone, her answers are not pre-programmed. She made world headlines when in October 2017, she attended a UN meeting on artificial intelligence and sustainable development and had a brief conversation with UN Deputy Secretary-General Amina J. Mohammed.

Shortly after, when attending a similar event in Saudi Arabia, she was granted citizenship of the country. This was rightfully labelled a historical event, as it was the first time ever a robot or AI system was granted such a distinction. It instantly raises many legal questions, e.g., regarding the legal rights and obligations artificial intelligence systems can have. What should the criteria for personhood be for robots and artificial intelligence systems? And what about liability?

Speaking of liability: a bar exam recently included the question: “if a robot kills somebody, is it murder or product liability?” The question was inspired by an article in Slate Magazine by Ryan Calo, which discussed Paolo Bacigalupi’s short story Mika Model. The story is about an artificial girl, the Mika Model, which strives to copy human behaviour and emotions, and is designed to develop its own individuality. In this particular case, the model seems to develop a mind of her own, and she ends up killing somebody who was torturing her. So Bacigalupi’s protagonist, Detective Rivera, finds himself asking a canonical legal question: when a robot kills, is it murder or product liability?

At present, the rule would still be that the manufacturer is liable. But that could change soon. AI systems can make their own decisions and are becoming more and more autonomous. What if intent can be proven? What if, as in Bacigalupi’s story, the actions of the robot are acts of self-preservation? Can we say that the Mika Model acted in self-defence? Or, coming back to Sophia: what if she, as a Saudi Arabian citizen, causes damage? Or commits blasphemy? Who is liable, the system or its manufacturer?

At a panel discussion in the UK, a third option was suggested with regard to the liability issue. One expert compared robots and AI systems to pets, and the manufacturers to breeders. In his view, if a robot causes damage, the owner is liable, unless he can prove it was faulty, in which case the manufacturer could be held liable.

The discussion is not an academic one, as we can expect such cases to be handled by courts in the near future.  In January 2017, the European Parliament’s legal affairs committee approved a wide-ranging report that outlines a possible framework under which humans would interact with AI and robots. These items stand out in the report:

  • The report states there is a need to create a specific legal status for robots, one that designates them as “electronic persons,” which effectively gives robots certain rights, and obligations. (This effectively would create a third type of personhood, apart from natural and legal persons).
  • Fearing a robot revolution, the report also wants to create an obligation for designers of AI systems to incorporate a “kill switch” into their designs.
  • As another safety mechanism the authors of the report suggest that Isaac Asimov’s three laws of robotics should be programmed into AI systems, as well. (1. A robot may not injure a human being or, through inaction, allow a human being to come to harm. 2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law. 3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.)
  • Finally, the report also calls for the creation of a European agency for robotics and AI that would be capable of responding to new opportunities and challenges arising from technological advancements in robotics.

It won’t be too long before Robot Law becomes part of the regular legal curriculum.


Legal AI and Bias

Justice is blind, but legal AI may be biased.

Like many advanced technologies, artificial intelligence (AI) comes with its advantages and disadvantages. Some of the potentially negative aspects of AI regularly make headlines. There is a fear that humans could be replaced by AI, and that AI might take our jobs. (As pointed out in a previous article, lawyers are less at risk of such a scenario: AI would perform certain tasks, but not take jobs, as only 23% of the work lawyers do can be automated at present). Others, like Elon Musk, predict doomsday scenarios if we start using AI in weapons or warfare. And there could indeed be a problem there: what if armed robotic soldiers are hacked, or have bad code and go rogue? Some predict that superintelligence (where AI systems become vastly more intelligent than human beings) and the singularity (i.e. the moment when AI systems become self-aware) are inevitable. The combination of both would lead to humans being the inferior species, and possibly being wiped out.

John Giannandrea, who leads AI at Google, does not believe these are the real problems with AI. He sees another problem, and it happens to be one that is very relevant to lawyers. He is worried about intelligent systems learning human prejudices. “The real safety question, if you want to call it that, is that if we give these systems biased data, they will be biased,” Giannandrea said.

The case that comes to mind is COMPAS, risk assessment software that is used to predict the likelihood of somebody being a repeat offender. It is often used in criminal cases in the US by judges and parole boards. ProPublica, a Pulitzer Prize winning non-profit news organization, decided to analyse how correct COMPAS was in its predictions. They discovered that COMPAS’ algorithms correctly predicted recidivism for black and white defendants at roughly the same rate. But when the algorithms were wrong, they were wrong in different ways for each race. African American defendants were almost twice as likely to be labelled higher risk when they did not actually re-offend. And for Caucasian defendants the opposite mistake was made: they were more likely to be labelled lower risk by the software, while in reality they did re-offend. In other words, ProPublica discovered a double bias in COMPAS, one in favour of white defendants, and one against black defendants. (Note that the makers of COMPAS dispute those findings and argue the data were misinterpreted).
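
The kind of imbalance ProPublica described can be illustrated with a small calculation: compute the false positive rate (labelled high risk but did not re-offend) and the false negative rate (labelled low risk but did re-offend) separately for each group. The data below are invented purely for illustration and are not ProPublica’s figures.

```python
# Illustrative only: invented predictions and outcomes, not real COMPAS data.
records = [
    # (group, predicted_high_risk, actually_reoffended)
    ("A", True,  False), ("A", True,  True), ("A", False, False), ("A", True,  False),
    ("B", False, True),  ("B", False, False), ("B", True,  True), ("B", False, True),
]

def error_rates(group):
    rows = [r for r in records if r[0] == group]
    fp = sum(1 for _, pred, actual in rows if pred and not actual)   # high risk, no re-offence
    fn = sum(1 for _, pred, actual in rows if not pred and actual)   # low risk, re-offended
    negatives = sum(1 for _, _, actual in rows if not actual)
    positives = sum(1 for _, _, actual in rows if actual)
    return fp / negatives, fn / positives

for g in ("A", "B"):
    fpr, fnr = error_rates(g)
    print(f"group {g}: false positive rate {fpr:.0%}, false negative rate {fnr:.0%}")
```

With these made-up numbers, group A is mostly wronged by false positives and group B by false negatives, even though overall accuracy can look similar: exactly the pattern of error that constitutes bias.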

The problem of bias in AI is real. AI is being used in more and more industries, like housing, education, employment, medicine and law. Some experts are warning that algorithmic bias is already pervasive in many industries, and that almost no one is making an effort to identify or correct it. “It’s important that we be transparent about the training data that we are using, and are looking for hidden biases in it, otherwise we are building biased systems,” Giannandrea added.

Giannandrea correctly points out that the underlying problem is a problem of lack of transparency in the algorithms that are being used. “Many of the most powerful emerging machine-learning techniques are so complex and opaque in their workings that they defy careful examination.”

Apart from all the ethical implications, the fact that it is unclear how the algorithms come to a specific conclusion could have legal implications. The U.S. Supreme Court might soon take up the case of a Wisconsin convict who claims his right to due process was violated when the judge who sentenced him consulted COMPAS. The argument used by the defence is that the workings of the system were opaque to the defendant, making it impossible to know on what grounds a defence had to be built.

To address these problems, a new institute, the AI Now Institute (ainowinstitute.org), was founded. It produces interdisciplinary research on the social implications of artificial intelligence and acts as a hub for the emerging field focused on these issues. Its main mission consists of “Researching the social implications of artificial intelligence now to ensure a more equitable future.” The institute wants to make sure that AI systems are sensitive and responsive to the complex social domains in which they are applied. To that end, new ways to measure, audit, analyse, and improve such systems will need to be developed.


An introduction to blockchain

You have probably heard about Bitcoin, and you may have heard about the underlying technology, Blockchain, too. But if your clients asked you what Bitcoin and Blockchain are, could you explain it to them? In this first of two articles, we’ll explore what Blockchain is. In a follow-up article, we will look at its relevance for lawyers.

Bitcoin is probably the best-known cryptocurrency. A cryptocurrency is a digital or virtual currency that uses encryption techniques to a) regulate the generation of units of currency, and b) verify the transfer of funds. What is important about these virtual currencies is that, unlike regular currencies, they operate independently of any government or central bank. Blockchain is the technology that was developed to make this possible. And the technology has the potential to revolutionise the way we do business. Some experts even predict that Blockchain will replace the Internet for doing business online. Add to that that Blockchain technology is not just useful for virtual currencies. Many other uses are possible, including legal ones like self-executing smart contracts.

To understand what Blockchain is, it is necessary to understand why it was created in the first place, and that is to solve the problem of “double spend”. If I go to a bookstore and buy a printed book, the book is physically transferred from the bookstore to me. The bookstore no longer has that copy, I do. And it’s possible for the bookstore to run out of copies. But if I buy an eBook online, things are different. What I get is a copy, and the online bookstore I got it from can still sell an unlimited number of copies. Digital products can be copied, infinitely. And that creates a special problem for digital currencies. What prevents me from spending the same 20 € three times, online? In this example, the bank does: if I go to an online store and spend 20 €, the bank will take that amount off my account and hand it over, often through intermediaries like credit card companies, to the store owner. But the whole purpose of Bitcoin was to be able to operate without any central bank or government. So, how can we make sure one Bitcoin isn’t spent more than once by the same owner? Blockchain is the technology that was invented to solve that problem. The way it does so is by creating a secure register or ledger that keeps track of all transactions, and of which copies are distributed over a peer-to-peer network.

Blockchain can be described as a distributed ledger technology (DLT) that consists of a distributed data structure and algorithms, which create a decentralized ledger or registry of transactions, which is both permanent / immutable, and secure.

A distributed ledger technology

Instead of keeping one central register or ledger, Blockchain consists of a decentralized network of volunteer-run nodes, each of which keeps an identical copy of the register. (The idea was that, to work with bitcoins, you need a bitcoin wallet, and every owner of a wallet should have a copy of the register). Each transaction that is registered gets a timestamp, and the network uses algorithms that ‘vote’ on the order in which transactions occur and ensure that each transaction is unique.

Blockchain is secure and permanent / immutable

“Once a majority of nodes reaches consensus that all transactions in the recent past are unique (that is, not double spent), they are cryptographically sealed into a block. Each new block is linked to previously sealed blocks to create a chain of accepted history, thereby preserving a verified record of every spend.” (ZDNet). This ‘cryptographic sealing’ uses hash functions and digital signatures that work in one way only. Let’s take an example: on 4 August 2017, at 8:15:00 AM UTC, wallet X transfers 1 bitcoin to wallet Y. Just as is the case with an online money transfer, this information is structured in a specific way. To that set of data, a one-way encryption is applied that is irreversible. The result is a unique string, let’s say, fictitiously, W(#MD31NAP^FV12. It is impossible to read from that string who paid what to whom. But if X claims he paid Y 1 bitcoin on 4 August 2017, at 8:15:00 AM UTC, then the ‘key’ will have to be W(#MD31NAP^FV12. So, if that key is found in the ledger matching that timestamp, it is irrefutable proof that the transaction indeed occurred in that way. If X claims he paid 2 bitcoins, the key would be different.
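
A minimal sketch of this one-way ‘sealing’, using Python’s standard hashlib module. The string layout is invented for illustration (the actual Bitcoin protocol hashes a binary transaction format with double SHA-256), but the principle is the same: the fingerprint is easy to compute from the data and practically impossible to reverse.

```python
import hashlib

def fingerprint(sender: str, receiver: str, amount: float, timestamp: str) -> str:
    """One-way hash of a transaction: easy to compute, infeasible to reverse."""
    data = f"{sender}->{receiver}:{amount}@{timestamp}"
    return hashlib.sha256(data.encode()).hexdigest()

h1 = fingerprint("X", "Y", 1.0, "2017-08-04T08:15:00Z")
h2 = fingerprint("X", "Y", 2.0, "2017-08-04T08:15:00Z")  # a claim of 2 bitcoins instead

print(h1)        # the 'key' stored in the ledger
print(h1 == h2)  # False: a different claim yields a different key
```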

Blockchain uses ‘consensus algorithms’ to make sure each transaction is unique, which is needed in case of conflicting data. Because of the algorithms it uses, Blockchain comes as close to being unhackable as is currently possible. And while there have been instances where Bitcoin was hacked, Blockchain itself, i.e. the underlying technology, has not. Still, the consensus mechanism has one inherent risk, which has been called the 51%-problem. The nodes in the network vote by majority. If a hacker succeeded in taking over 51% of the nodes in the network, he could start manipulating the votes to change records, i.e. replace them with modified ones. It would still be a hard thing to accomplish, as extra security mechanisms have been built in. Which leads us to the next item.

Blockchain is permanent and immutable: each block of data in the Blockchain is time-stamped, and can only be added to the chain after the time stamp is applied and verified by the distributed computers across the chain. The practical effect of this is that a block of data can never be changed retrospectively, as all subsequent records would have to be modified as well.
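
The same idea, extended to the chain as a whole, can be sketched as follows: each block stores the hash of the previous block, so changing an earlier block breaks every later link. This is a deliberately simplified model; real blockchains add distributed consensus, proof-of-work, and much more.

```python
import hashlib, json, time

def block_hash(block: dict) -> str:
    # Hash the block's full contents in a stable (sorted-key) order.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain: list, transactions: list) -> None:
    previous = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"time": time.time(), "tx": transactions, "prev": previous})

def is_valid(chain: list) -> bool:
    """Valid only if every block still points at the unchanged hash of its predecessor."""
    return all(chain[i]["prev"] == block_hash(chain[i - 1]) for i in range(1, len(chain)))

chain: list = []
add_block(chain, ["X pays Y 1 BTC"])
add_block(chain, ["Y pays Z 0.5 BTC"])
print(is_valid(chain))                   # True

chain[0]["tx"] = ["X pays Y 100 BTC"]    # retroactive tampering
print(is_valid(chain))                   # False: the next block's 'prev' no longer matches
```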

Blockchain has many advantages: its decentralization makes it independent and secure. Because the whole process is managed by algorithms and no human interventions are necessary, the transaction fees are lower, and transactions themselves can be conducted more quickly.

In short, Blockchain technology has the potential to disrupt several markets, and lawyers will have to be prepared for that. There already are legal applications for the technology, as well. We will have a look at all of that, in a follow-up article.

 
