
An introduction to AI computers for lawyers

AI computers are being called the biggest development in the PC industry in 25 years. Experts believe they could also trigger a refresh cycle in the PC industry. In this article, we will answer the following questions: What are AI computers? What are the benefits of AI computers, and what are the benefits for lawyers? Do you, as a lawyer, need to get yourself one? And what are the challenges and limitations for legal work?

What are AI computers?

So, what are AI computers? The term was coined by Intel, which describes it as follows: an AI PC has a CPU, a GPU, and an NPU, each with specific AI acceleration capabilities. An NPU, or neural processing unit, is a specialized accelerator that handles artificial intelligence (AI) and machine learning (ML) tasks right on your PC instead of sending data to be processed in the cloud. The GPU and CPU can also process these workloads, but the NPU is especially good at low-power AI calculations. The AI PC represents a fundamental shift in how our computers operate. It is not a solution in search of a problem; rather, it promises to be a huge improvement for everyday PC usage.

In other words, AI PCs are regular personal computers that are supercharged with specialized hardware and software. These are specifically designed to handle tasks involving artificial intelligence and machine learning. When it comes to the hardware, what stands out is the presence of an NPU, i.e., a Neural Processing Unit. Its job is to accelerate AI workloads, particularly those that require real-time processing, like voice recognition, image processing, and deep learning applications.
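As a purely illustrative sketch (the task names and routing rules below are hypothetical, not any vendor's actual scheduler), the division of labour can be pictured as a routing decision: low-power, real-time tasks stay on the local NPU when one is present, while heavier workloads still go to the cloud:

```python
def route_ai_task(task_type, has_npu):
    """Decide where an AI workload runs. Low-power, real-time tasks are
    handled on the device's NPU when available; everything else falls
    back to cloud processing. (Illustrative toy logic only.)"""
    local_friendly = {"voice_recognition", "background_blur", "image_enhance"}
    if has_npu and task_type in local_friendly:
        return "npu"    # on-device: low latency, data stays local
    return "cloud"      # heavy or unsupported workloads

print(route_ai_task("voice_recognition", has_npu=True))   # npu
print(route_ai_task("model_training", has_npu=True))      # cloud
```

The point is not the routing table itself, but that the decision happens on your machine: the more task types the NPU can take over, the less data ever leaves the device.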

AI PCs also run specialized software stacks, frameworks, and libraries tailored for Artificial intelligence and Machine Learning workloads. “The distinction between AI software and ‘normal’ software lies in how each type of application processes the work you ask it to do. A conventional application just provides pre-defined tools not unlike the specialty tools in a toolbox: you must learn the best way to use those tools, and you need personal experience in using them to be effective on the project at hand. It’s all up to you, every step of the way. In contrast, AI software can learn, make decisions, and tackle complex creative tasks in the same way a human might. That learning capability gives you a new kind of tool that can simply do the job for you at your request, because it has been trained to do so. This fundamental difference enables AI software to automate complex tasks, offer personalized experiences, and process vast amounts of data efficiently, transforming how we interact with our computers.”

Benefits of AI computers

Why were AI computers created in the first place? Generative AI has become extremely popular, but it puts heavy workloads on the cloud servers that power it. The idea is to share that workload with the users' PCs, and for that, you need powerful PCs with the necessary hardware and software. In short, AI computers benefit the users as well as the manufacturers and AI service providers.

Benefits for users

Experts have identified many potential benefits for the users. AI PCs can boost productivity, enhance creativity, and improve user experience. Below are some of the key advantages the literature mentions, in no particular order.

Enhanced and accelerated performance for AI tasks: AI PCs are equipped with hardware specifically designed to tackle demanding AI applications. This translates to faster processing of complex calculations and data analysis, crucial for tasks like video editing, scientific simulations, and training AI models. This acceleration can significantly speed up the training and inference of deep learning models. Other applications, such as video conferencing, can also benefit greatly from this enhanced performance.

Improved efficiency and automation: AI features can automate repetitive tasks, freeing you up for more strategic work. Imagine software that automatically categorizes your files or optimizes battery life based on usage patterns.
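As a toy illustration of that kind of automation, the sketch below groups files into folders by type. The category mapping is a hypothetical hard-coded rule set; a real AI assistant would learn such rules from your usage patterns:

```python
from pathlib import Path

# Hypothetical mapping; a real AI feature would learn these rules.
CATEGORIES = {
    ".docx": "Documents", ".pdf": "Documents",
    ".xlsx": "Spreadsheets",
    ".png": "Images", ".jpg": "Images",
}

def categorize(filenames):
    """Group file names into folders by extension."""
    result = {}
    for name in filenames:
        folder = CATEGORIES.get(Path(name).suffix.lower(), "Other")
        result.setdefault(folder, []).append(name)
    return result

print(categorize(["brief.docx", "exhibit.png", "notes.txt"]))
# {'Documents': ['brief.docx'], 'Images': ['exhibit.png'], 'Other': ['notes.txt']}
```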

Improved power efficiency: AI accelerators like NPUs are designed to be power-efficient, consuming less energy while delivering high performance. Laptop batteries, for example, will last longer before needing to be recharged. AI PCs can thus lead to lower operating costs and a smaller environmental footprint.

Personalized User Experience: AI can learn your preferences and adjust settings accordingly. Brightness, keyboard responsiveness, and even video call framing could adapt to your needs, creating a more comfortable and efficient work environment.

Boosted Creativity: some AI PCs come with built-in creative tools that can generate ideas, translate languages, or even write different creative text formats based on your prompts. This can be a game-changer for designers, writers, and anyone looking for a spark of inspiration.

Enhanced Security: AI-powered security features can constantly monitor for threats and potential breaches, offering an extra layer of protection for your data.

Benefits for chip manufacturers and for service providers

The new AI computers do not only benefit the users. As mentioned before, having part of the workload done on the users' side also considerably reduces the load on the servers of the AI service providers. One expert even estimates that, "By end of 2025, 75% of enterprise-managed data will be processed outside the data centre." Service providers will therefore have to invest less in infrastructure.

At the same time, AI PCs can be useful in the data centre, too. Two important benefits they offer are scalability and a faster time-to-market. Many AI PCs support multiple AI accelerators, allowing for scaling up the computational power by adding more accelerators as needed. This scalability enables handling larger and more complex AI models and workloads. The accelerated performance of AI PCs can also significantly reduce the time required for training AI models, enabling faster iteration and deployment cycles for AI applications and solutions.

The introduction of a new type of personal computer is, of course, also good news for the manufacturers, as it creates a new – and booming – market. It should not come as a surprise, then, that all major chip manufacturers – Intel, Nvidia, AMD, and Qualcomm – have started making NPU chips. Apple, too, has announced new AI-optimized chips. It is safe to assume that soon all new PCs, laptops, and tablets will be AI computers.

Benefits for lawyers

All of this raises the question: do you, as a lawyer, need one? Well, apart from the abovementioned general benefits, AI computers can offer lawyers specific benefits, too. They can, e.g., significantly enhance the efficiency of legal practices by automating routine tasks such as document review, legal research, eDiscovery, and contract analysis. Experts anticipate the following benefits.

Improved Legal Research: AI can analyse vast amounts of legal documents, regulations, precedents, and case law, helping lawyers identify relevant precedents and arguments much faster. This can save significant time and effort compared to traditional research methods.

Contract analysis and enhanced due diligence: AI can sift through contracts and financial records, highlighting potential risks and areas requiring closer scrutiny during due diligence processes. This is typically a time-consuming task for lawyers, which AI can complete very quickly, while also improving the accuracy and efficiency of legal reviews.

Legal document analysis, review, and drafting assistance: AI-powered tools can help lawyers draft legal documents by suggesting language, identifying inconsistencies, and ensuring compliance with regulations. AI models can also be trained to analyse and extract relevant information from large volumes of legal documents, contracts, and case files. The computational power of AI PCs can speed up this process significantly.

Predictive analytics: with the help of AI PCs, lawyers can develop predictive models to analyse the potential outcomes of legal cases based on historical data and various factors.
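The principle can be shown with a deliberately tiny sketch: estimate the chance of success from the outcomes of comparable past cases. The case data below is invented, and real predictive-analytics tools use trained statistical models over far more factors:

```python
def win_rate(history, court, claim_type):
    """Share of comparable past cases that were won, or None if there
    are no comparable cases. (A toy stand-in for a trained model.)"""
    similar = [won for (c, t, won) in history if c == court and t == claim_type]
    if not similar:
        return None
    return sum(similar) / len(similar)

past_cases = [  # (court, claim type, won?) - invented data
    ("commercial", "contract", True),
    ("commercial", "contract", False),
    ("commercial", "contract", True),
    ("labour", "dismissal", False),
]
print(round(win_rate(past_cases, "commercial", "contract"), 2))  # 0.67
```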

Natural language processing (NLP): AI PCs can be used to train and deploy NLP models for tasks like legal document summarization, information extraction, and sentiment analysis.
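To give a flavour of what such NLP tasks involve, the toy sketch below does naive keyword extraction on a contract clause. Production models trained on legal corpora go far beyond word counting; this only shows the input/output shape of summarization-style tasks:

```python
import re
from collections import Counter

def keyword_summary(text, top_n=3):
    """Naive extractive step: return the most frequent substantive words.
    (A toy stand-in for trained summarization/extraction models.)"""
    stopwords = {"the", "a", "an", "of", "and", "to", "in", "is", "on", "that"}
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in stopwords)
    return [word for word, _ in counts.most_common(top_n)]

clause = ("The lessee shall pay the rent monthly. The rent is due on the "
          "first day of each month. Late rent incurs interest.")
print(keyword_summary(clause)[0])  # rent
```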

Challenges and limitations for legal work

At present, however, AI computers are still facing some challenges and limitations when it comes to legal work. While AI PCs can provide computational advantages, many legal applications may not require the full power of these specialized systems. For routine legal work, such as drafting documents or conducting basic research, regular desktop or laptop computers might suffice.

AI computers still have limited judgment and creativity. The core tasks of lawyers often involve legal reasoning, strategy, and creative problem-solving, areas where AI is still not very advanced. AI PCs can’t replace a lawyer’s ability to analyse complex situations, develop persuasive arguments, or adapt to unexpected circumstances in court.

There is also the issue of data dependence and accuracy: the effectiveness of AI tools relies heavily on the quality and completeness of the data they are trained on. Legal data can be complex and nuanced, and errors in the data can lead to inaccurate or misleading results.

The benefits may not justify the higher costs. AI PCs can be significantly more expensive than traditional PCs. For lawyers who don’t handle a high volume of complex legal matters that heavily rely on AI-powered research or due diligence, the cost may therefore not be justified.

CONCLUSION

AI PCs can be a valuable tool for lawyers, especially for tasks like legal research and due diligence. However, they should not be seen as a replacement for human lawyers. AI is best used to augment a lawyer's skills and expertise, not to replace them. And at present, AI computers may be overkill for day-to-day legal work, where existing computers can handle the workload and the extra cost of an AI PC is not justified.

It is also important to consider that AI PCs rely on new and still-evolving technology. They are a relatively new concept, and their functionalities are still under development. The "killer application" that justifies the potentially higher cost might not be here yet. Moreover, to fully benefit from AI features, you will need compatible software that can leverage the AI capabilities of your PC.

The decision to invest in AI PCs for legal work would depend on factors such as the specific use cases, the volume of data or workload, the complexity of the AI models required, and the potential return on investment. Law firms or legal departments with a significant focus on AI-driven legal technologies may find AI PCs more beneficial than those with more traditional workflows. But for many lawyers, a traditional PC with good legal research software might still be the most practical solution.

 


 

The EU AI Act

In previous articles, we discussed the dangers of AI and the need for AI regulations. On 13 March 2024, the European Parliament approved the “Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts.” The act was a proposal by the EU Commission of 21 April 2021. The act is usually referred to by its short name of the Artificial Intelligence Act, or the EU AI Act. In this article we look at the following questions: what is the EU AI Act? What is the philosophy of the EU AI Act? We discuss the limited risk applications and the high-risk applications. Finally, we also look at the EU AI Act’s entry into force and the penalties.

What is the EU AI Act?

As the full title suggests, it is a regulation that lays down harmonised rules on artificial intelligence across the EU. Rather than focusing on specific applications, it deals with the risks that applications pose, and categorizes them accordingly. The Act imposes stringent requirements for high-risk categories to ensure safety and fundamental rights are upheld. The Act’s recent endorsement follows a political agreement reached in December 2023, reflecting the EU’s commitment to a balanced approach that fosters innovation while addressing ethical concerns.

Philosophy of the EU AI Act

The AI Act aims to provide AI developers and deployers with clear requirements and obligations regarding specific uses of AI. At the same time, the regulation seeks to reduce administrative and financial burdens for business, in particular for small and medium-sized enterprises (SMEs). The aim of the new rules is to foster trustworthy AI in Europe and beyond, by ensuring that AI systems respect fundamental rights, safety, and ethical principles and by addressing risks of very powerful and impactful AI models.

The AI Act ensures that Europeans can trust what AI has to offer. Because AI applications and frameworks can change rapidly, the EU chose to address the risks that applications pose. While most AI systems pose limited to no risk and can contribute to solving many societal challenges, certain AI systems create risks that must be addressed to avoid undesirable outcomes. The Act distinguishes four levels of risk:

  • Unacceptable risk: applications with unacceptable risk are never allowed. All AI systems considered a clear threat to the safety, livelihoods and rights of people will be banned, from social scoring by governments to toys using voice assistance that encourages dangerous behaviour.
  • High risk: to be allowed, high-risk applications must meet stringent requirements to ensure safety and fundamental rights are upheld. We have a look at those below.
  • Limited risk: applications are considered to pose limited risk when they lack transparency, i.e., when users may not know what their data are used for or what that usage implies. Limited-risk applications can be allowed if they comply with specific transparency obligations.
  • Minimal or no risk: the AI Act allows the free use of minimal-risk and no-risk AI. This includes applications such as AI-enabled video games or spam filters. The vast majority of AI systems currently used in the EU fall into this category.

Let us have a closer look at the limited and high-risk applications.

Limited Risk Applications

As mentioned, limited risk refers to the risks associated with a lack of transparency. The AI Act introduces specific obligations to ensure that humans are sufficiently informed when necessary. E.g., when using AI systems such as chatbots, humans should be made aware that they are interacting with a machine so they can make an informed decision to continue or step back. Providers will also have to ensure that AI-generated content is identifiable. In addition, AI-generated text published with the purpose of informing the public on matters of public interest must be labelled as artificially generated. This also applies to audio and video content constituting deep fakes.

High Risk Applications

Under the EU AI Act, high-risk AI systems are subject to strict regulatory obligations due to their potential impact on safety and fundamental rights.

What are high risk applications?

As mentioned, all AI systems considered a clear threat to the safety, livelihoods, and rights of people are considered high-risk. These systems are classified into three main categories: a) those covered by EU harmonisation legislation, b) those that are safety components of certain products, and c) those listed as involving high-risk uses. Specifically, high-risk AI includes applications in critical infrastructure (such as traffic control and utilities management), as well as biometric and emotion recognition systems. It also applies to AI used in education and employment for decision-making processes.

What are the requirements for high-risk applications?

High-risk AI systems under the EU AI Act are subject to a comprehensive set of requirements designed to ensure their safety, transparency, and compliance with EU standards. These systems must have a risk management system in place to assess and mitigate potential risks throughout the AI system’s lifecycle. Data governance and management practices are crucial to ensure the quality and integrity of the data used by the AI, including provisions for data protection and privacy. Providers must also create detailed technical documentation that covers all aspects of the AI system, from its design to deployment and maintenance.

Furthermore, high-risk AI systems require robust record-keeping mechanisms to trace the AI’s decision-making process. This is essential for accountability and auditing purposes. Transparency is another key requirement, necessitating clear and accessible information to be provided to users and ensuring they understand the AI system’s capabilities and limitations. Human oversight is mandated to ensure that AI systems do not operate autonomously without human intervention, particularly in critical decision-making processes. Lastly, these systems must demonstrate a high level of accuracy, robustness, and cybersecurity to prevent errors and protect against threats.

The EU AI Act’s entry into force

The act will enter into force two years after it was approved, i.e., on 13 March 2026. This gives member states the opportunity to implement compliant legislation. It also gives providers a two-year window to adapt to the regulation.

The European AI Office, established in February 2024 within the Commission, will oversee the AI Act’s enforcement and implementation with the member states.

Penalties

The EU AI Act enforces a tiered penalty system to ensure compliance with its regulations. For the most severe violations, particularly those involving prohibited AI systems, fines can reach up to €35 million or 7% of the company’s worldwide annual turnover, whichever is higher. Lesser offenses, such as providing incorrect, incomplete, or misleading information to authorities, may result in penalties up to €7.5 million or 1% of the total worldwide annual turnover. These fines are designed to be proportionate to the nature of the infringement and the size of the entity, reflecting the seriousness of non-compliance within the AI sector.
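The "whichever is higher" rule can be expressed as a small calculation. The sketch below covers the two tiers mentioned above; the tier labels are our own, and the figures follow the Act as summarized here:

```python
def eu_ai_act_fine(tier, worldwide_turnover_eur):
    """Maximum fine under the tiered system: the higher of a fixed cap
    or a percentage of worldwide annual turnover."""
    tiers = {
        "prohibited_practices": (35_000_000, 0.07),   # up to EUR 35m or 7%
        "incorrect_information": (7_500_000, 0.01),   # up to EUR 7.5m or 1%
    }
    fixed_cap, pct = tiers[tier]
    return max(fixed_cap, pct * worldwide_turnover_eur)

# For a company with EUR 1 billion turnover, 7% (EUR 70m) exceeds the
# EUR 35m cap, so the percentage governs.
print(eu_ai_act_fine("prohibited_practices", 1_000_000_000))
```

For smaller companies, the fixed cap is the binding figure; for large multinationals, the turnover percentage quickly becomes the larger of the two.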

Conclusion

The EU AI Act represents a significant step in the regulation of artificial intelligence within the EU. It sets a precedent as the first comprehensive legal framework on AI worldwide. The Act mandates a high level of diligence, including risk assessment, data quality, transparency, human oversight, and accuracy for high-risk systems. Providers and deployers must also adhere to strict requirements regarding registration, quality management, monitoring, record-keeping, and incident reporting. This framework aims to ensure that AI systems operate safely, ethically, and transparently within the EU. Through these efforts, the European AI Office strives to position Europe as a leader in the ethical and sustainable development of AI technologies.

 


 

The 2024 law firm

Usually, at the beginning of a new year, we look back at the trends in legal technology of the last year. Unfortunately, the reports needed to do that are not available yet. So, instead, with Lamiroy Consulting turning 30 in February 2024, we will have a look at the 2024 law firm, and at how law firms have evolved over the last decades. We will discuss a range of topics concerning technology and automation in the 2024 law firm, including artificial intelligence and communications. We will also see how the cloud, remote work, and virtual law offices have changed law firms.

Technology and automation in the 2024 law firm

Let us go back in time. The early 80s saw the introduction of the first personal computers and home computers. Word processors had been around for a few years before that, but were found in only a very small minority of law firms at the time. The Internet as we know it did not exist yet. By 1994, things had started to change, and a legal technology revolution was on the horizon. Fast forward to 2024: law firms that don't use computers or equivalent mobile devices are an endangered – if not extinct – species. Many operational processes in the law firm have been automated.

So, it is safe to say that over the past decades, technology and automation have transformed the legal industry in many ways. According to a report by The Law Society, some of the factors that have contributed to this transformation include increasing workloads and complexity of work, changing demographic mix of lawyers, and greater client pressure on costs and speed.

Two of the most significant changes have been the introduction of the internet, with its cloud technologies, and of Artificial Intelligence (AI). Most of the evolutions described below have been made possible by the Internet.

Artificial Intelligence

The introduction of Artificial Intelligence (AI) has been one of the main factors driving a substantial transformation of the legal industry. One of its main benefits has been that it notably improved attorney efficiency. AI plays a key role in tasks such as sophisticated writing, research, and document drafting, significantly expediting processes that traditionally could take weeks.

Communications

Law firms have moved from traditional methods of communication such as snail mail to more modern methods. These include email, client portals, and cloud-based communications, like Teams and SharePoint. They allow people to share documents with different levels of permissions, ranging from reading and commenting to editing.

Client portals have become increasingly popular in recent years, allowing clients to access their legal documents and communicate with their attorneys in real-time. This has made it easier for clients to stay informed about their cases and has improved the efficiency of law firms.

Cloud, Remote work, and Virtual Law Office

The legal industry has experienced a notable surge in remote work and virtual law offices, a shift accelerated in particular by the COVID-19 pandemic. Virtual law offices, facilitated by cloud-based practice management software, allow attorneys to work from anywhere, leading to increased flexibility and reduced overhead costs for law firms. The cloud has played a crucial role in this shift: it allows virtual lawyers to run fully functional law firms on any device at significantly lower costs compared to on-premise solutions.

Digital Marketing and Online Presence

The legal industry has also witnessed major changes in its marketing practices over the past decades, adapting to the internet era. Recent studies indicate that one-third of potential clients initiate their search for an attorney online. This emphasizes the importance of a strong online presence for law firms to stay competitive. Law firms now prioritize digital marketing through channels like social media, email, SEO, and websites. Whether marketing the entire firm or individual lawyers, a robust digital strategy is essential for establishing credibility and connecting with potential clients. Personal branding is crucial for individual lawyers, highlighting achievements and values, while law firms should focus on building trust through a comprehensive digital marketing strategy.

Billing and Financial Changes in the 2024 Law Firm

Another area where the legal industry has undergone significant changes is in billing and financial practices. In the past, law firms relied on traditional billing methods such as paper invoices and checks. However, with the advent of technology, law firms have shifted to digital billing methods such as electronic invoices and online payment systems. This has made the billing process more efficient and streamlined. In addition to digital billing methods, law firms have also adopted new financial practices such as trust accounting. Trust accounting is a method of accounting that is used to manage funds that are held in trust for clients. This is particularly important for law firms that handle client funds, such as personal injury or real estate law firms.

Over the last decades, alternative fee arrangements (AFAs) have also significantly impacted the legal industry. They did so by offering pricing models distinct from traditional hourly billing. AFAs, including fixed fees, contingency fees, and blended rates, have gained popularity as clients seek greater transparency and predictability in legal fees. A recent study identified 22 law firms excelling in integrating AFAs into their service delivery, earning praise from clients for improved pricing and value. The study underscores a client preference for firms providing enhanced pricing and value. This emphasizes how AFAs not only contribute to better relationships between law firms and clients but also demonstrate the firms’ commitment, fostering trust and credibility.

Legal Research and Analytics

Legal research and analytics have also been revolutionised over the last decades, making the process more efficient and accessible. We have seen primary and secondary legal research publications become available online. Facilitated by information and communication technologies, this has replaced traditional storage methods like compact discs or print media. This shift has not only increased accessibility but also allowed legal professionals to conduct more comprehensive and accurate research. Furthermore, the emergence of legal analytics has empowered professionals to enhance legal strategy, resource management, and matter forecasting by identifying patterns and trends in legal data.

Client Expectations

Another notable change is that clients’ expectations of lawyers have evolved significantly. A recent survey highlights that 79% of legal clients consider efficiency and productivity crucial, indicating a demand for more effective legal services. Additionally, clients now expect increased accessibility and responsiveness from their lawyers, prompting law firms to integrate new technologies such as client portals and online communication tools. Transparency in fees and billing practices is also a priority for clients, leading to the growing adoption of alternative fee arrangements by law firms. (Cf. above).

Globalization

Finally, globalization has significantly impacted the legal industry. It forced law firms to adapt to a changing global landscape and heightened demand for legal services across borders. Many European law firms, these days, are members of some international legal network, with branches in many EU countries. And a recent study highlights the emergence of a new corporate legal ecosystem in emerging economies like India, Brazil, and China. This presents opportunities for law firms to expand globally. In response to the globalization of business law and the increasing demand from transnational companies, law firms are transforming their practices. They do so by merging across borders and creating mega practices with professionals spanning multiple countries. This shift has prompted the adoption of new managerial practices and strategies to effectively manage global operations within these law firms.


Microsoft SharePoint Syntex

In this blog post, we will explore Microsoft SharePoint Syntex. We focus on the following questions: What is Microsoft SharePoint Syntex? What can it do? Is Microsoft SharePoint Syntex already available? What are the benefits of Microsoft SharePoint Syntex? And what are the benefits for lawyers?

In the last year, generative AI has been making headlines. (See, e.g., our articles on ChatGPT for lawyers, on Generative AI, and on the dangers of AI). Many software companies have started integrating generative AI into their products and services. Microsoft is no exception. Two of their new generative AI services stand out: CoPilot and SharePoint Syntex. This article is about SharePoint Syntex. Our next article will be about CoPilot.

What is Microsoft SharePoint Syntex?

So, what is Microsoft’s SharePoint Syntex? It is a new product that uses advanced AI and machine teaching to help you capture, manage, and reuse your content more effectively. As the name suggests, it is in essence an add-on feature for SharePoint. (Our blog also has an article on using SharePoint in law firms). Once it is installed, it can be used in some other programs as well. (See below).

Microsoft describes SharePoint Syntex as a content understanding, processing, and compliance service. It uses intelligent document processing, content artificial intelligence (AI), and advanced machine learning. This allows it to automatically and thoughtfully find, organize, and classify documents in your SharePoint libraries, Microsoft Teams, OneDrive for Business, and Exchange. With Syntex, you can automate your content-based processes—capturing the information in your law firm’s documents and transforming that information into working knowledge.

Syntex is the first product from Project Cortex, a Microsoft 365 initiative that aims to empower people with knowledge and expertise in the apps they use every day.

What can it do?

Microsoft Syntex offers several services and features to help you enhance the value of your content, build content-centric apps, and manage content at scale. Some of the main services and features are:

Content assembly: you can automatically generate standard repetitive business documents, such as contracts, statements of work, service agreements, letters of consent, and correspondence. With Syntex, you can do all these tasks more quickly, more consistently, and with fewer errors. You create modern templates based on the documents you use most, and then use those templates to automatically generate new documents using SharePoint lists or user entries as a data source.
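Conceptually, content assembly is template filling driven by a data source. The Python sketch below mimics the idea with a plain string template; the template text, field names, and party names are all invented, standing in for a Syntex modern template fed from a SharePoint list:

```python
# Hypothetical template with placeholder fields, standing in for a
# Syntex modern template; the rows stand in for a SharePoint list.
TEMPLATE = ("This service agreement is made between {firm} and {client}, "
            "effective {date}.")

def assemble(rows):
    """Generate one document per data row."""
    return [TEMPLATE.format(**row) for row in rows]

docs = assemble([
    {"firm": "Lamiroy & Partners", "client": "Acme NV", "date": "1 March 2024"},
])
print(docs[0])
```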

Prebuilt document processing: You can use a prebuilt model to save time processing and extracting information from contracts, invoices, or receipts. Prebuilt models are pretrained to recognize common documents and the structured information in the documents. Instead of having to create a new document processing model from scratch, you can use a prebuilt model to jumpstart your document project.

Structured and freeform document processing: You can use a structured model to automatically identify field and table values. It works best for structured or semi-structured documents, such as forms and invoices. You can use a freeform model to automatically extract information from unstructured and freeform documents, such as letters and contracts where the information can appear anywhere in the document. Both structured and freeform models use Microsoft Power Apps AI Builder to create and train models within Syntex.
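To give a feel for what structured document processing produces, here is a toy stand-in that pulls two fields out of a semi-structured invoice using regular expressions. Syntex builds trained AI Builder models for this rather than regexes, and the field names, patterns, and invoice text below are invented:

```python
import re

# Invented field names and patterns; Syntex would use a trained model.
FIELDS = {
    "invoice_no": r"Invoice\s*#?\s*:\s*(\S+)",
    "total": r"Total\s*:\s*€?([\d.,]+)",
}

def extract_fields(text):
    """Return a dict of field name -> extracted value (or None)."""
    values = {}
    for name, pattern in FIELDS.items():
        match = re.search(pattern, text)
        values[name] = match.group(1) if match else None
    return values

invoice = "Invoice #: 2024-117\nClient: Acme NV\nTotal: €1.250,00"
print(extract_fields(invoice))
# {'invoice_no': '2024-117', 'total': '1.250,00'}
```

The trained models earn their keep precisely where this toy fails: when the same information appears in different places, layouts, and wordings across documents.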

Content AI: You can understand and gather content with AI-powered summarization, translation, auto-assembly, and annotations incorporated into Microsoft 365 and Teams.

Content apps: You can extend and develop content apps with high-volume containers, data, and rich APIs.

Content management: You can analyse and protect content through its lifecycle with AI powered security and compliance, backup/restore and advanced content management.

Is Microsoft SharePoint Syntex already available?

SharePoint Syntex was released on 1 October 2023, and is available in all countries where Microsoft 365 is offered. So, if you are a CICERO LawPack user, you can start using it already. But note that there are some differences in the availability of languages and pricing for SharePoint Syntex in Europe.

SharePoint Syntex supports 21 languages for document understanding models and 63 languages for form processing models. (The article in the Microsoft Tech Community on the availability, which is listed in the sources below, has the full list of supported languages). All languages in which Microsoft 365 is available in Europe are available for Syntex within Europe. This does not mean, however, that all languages are available in all regions. For example, some languages are only available in the US region, such as Arabic, Hebrew, Hindi, Thai, and Vietnamese.

The pricing of SharePoint Syntex depends on the type of licensing and the number of transactions, as well as on the region and currency. There are two options for licensing: per-user and pay-as-you-go. Per-user licensing costs $5 per user per month in the US and allows unlimited usage of Syntex services. The price in EUR may differ depending on the exchange rate and local taxes. Pay-as-you-go licensing charges based on the total number of pages processed by Syntex, with different rates for unstructured, structured, and prebuilt document processing.

According to the Microsoft website, the price of SharePoint Syntex in Belgium is €7,90 per user per month for per-user licensing, and €0,04 per transaction for unstructured document processing, €0,01 per transaction for prebuilt document processing, and €0,04 per transaction for structured and freeform document processing for pay-as-you-go licensing. These prices do not include VAT and may vary depending on the currency exchange rate and the Azure subscription plan. You can find the exact price of SharePoint Syntex in your region and currency on the Microsoft 365 Enterprise Licensing page (listed below in the sources).
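The choice between the two licensing models boils down to a simple break-even calculation. As an illustration only, here is a short Python sketch using the Belgian example prices quoted above (EUR, excluding VAT); actual prices depend on your region, exchange rate, and Azure subscription plan:

```python
# Rough cost comparison of Syntex licensing models.
# Prices are the illustrative Belgian figures from the article above.

PER_USER_MONTHLY = 7.90          # per-user licence: EUR per user per month
PAYG_RATES = {                   # pay-as-you-go: EUR per processed transaction
    "unstructured": 0.04,
    "prebuilt": 0.01,
    "structured": 0.04,          # structured and freeform share this rate
}

def payg_cost(transactions):
    """Monthly pay-as-you-go cost for a mix of transaction counts."""
    return sum(PAYG_RATES[kind] * count for kind, count in transactions.items())

def cheaper_model(users, transactions):
    """Return which licensing model is cheaper for this usage pattern."""
    per_user_total = users * PER_USER_MONTHLY
    payg_total = payg_cost(transactions)
    return "per-user" if per_user_total < payg_total else "pay-as-you-go"

# A small firm: 5 users, 500 unstructured + 2,000 prebuilt transactions a month.
mix = {"unstructured": 500, "prebuilt": 2000}
print(round(payg_cost(mix), 2))   # 40.0  (500 x 0.04 + 2000 x 0.01)
print(cheaper_model(5, mix))      # per-user  (5 x 7.90 = 39.50 < 40.00)
```

In this hypothetical example the per-user licence is marginally cheaper; a firm processing fewer documents per user would favour pay-as-you-go.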

What are the benefits of Microsoft SharePoint Syntex?

Microsoft Syntex can help your law firm automate business processes, improve search accuracy, and manage compliance risk. With content AI services and capabilities, you can build content understanding and classification directly into the content management flow. Some of the benefits of using Microsoft Syntex are:

Increased productivity: Your law firm can save time and resources by automating repetitive tasks such as document generation, extraction, classification, tagging, indexing, summarization, translation, etc. You can also access your content faster and easier by using intelligent search capabilities that leverage metadata and AI insights.

Improved quality: You can reduce errors and inconsistencies by using standardized templates, prebuilt models, or custom models that suit your specific needs. You can also ensure that your content is accurate, relevant, and up to date by using AI-powered analytics and feedback mechanisms.

Enhanced security: You can protect your sensitive data by using AI-powered security and compliance features that help you identify risks, apply policies, enforce retention rules, monitor activity, audit changes, etc. You can also backup and restore your content in case of accidental deletion or corruption.

What are the benefits for lawyers?

For lawyers in particular, Microsoft Syntex can offer some additional benefits that can help them streamline their legal workflows, improve their client service, and reduce their liability exposure.

Faster contract review: Lawyers can use prebuilt or custom models to automatically extract key information from contracts such as parties, clauses, terms, dates, amounts, etc. They can also use content assembly to automatically generate contracts based on templates and data sources. This can help them speed up their contract review process, avoid missing important details or deadlines, and ensure consistency across their contracts.

Easier knowledge management: Lawyers can use content AI to automatically summarize, translate, annotate, tag, and index their legal documents, such as cases, opinions, briefs, and memos. They can also use intelligent search to quickly find relevant information across their SharePoint libraries or Teams channels. This can help them manage their legal knowledge more effectively, access the information they need when they need it, and share it with their colleagues or clients.

Better compliance and risk management: Lawyers can use content management to automatically apply security and compliance policies to their legal documents based on their sensitivity, confidentiality, or retention requirements. They can also use AI-powered analytics and monitoring to identify potential issues, conflicts, or breaches in their documents and take appropriate action. This can help them comply with their ethical and legal obligations, protect their clients' interests, and reduce their liability exposure.

 

Sources:

 

Ambient Computing for lawyers

In our previous article, we discussed ambient computing: what it is, and what the benefits and challenges are. In this article we discuss what the relevance of ambient computing is for lawyers. We look at ambient law, which deals with the legal aspects of ambient computing. Then we ask ourselves, “what are the benefits of ambient computing for lawyers?”, and “what are the challenges?”. But first we start with a short recap on what ambient computing is.

A short recap: what is ambient computing?

In our previous article, we explained that “ambient computing is the idea of embedding computing power into everyday objects and environments, to make them smart, connected, and responsive. The goal is to make it easier for users to take full advantage of technology without having to worry about the details. (…) Ambient computing relies on a variety of technologies, such as sensors, artificial intelligence, cloud computing, voice recognition, gesture control, and wearable devices, to create a seamless and personalized user experience. Ambient computing devices are designed to be unobtrusive and blend into the background, so that users can focus on their tasks and goals rather than on the technology itself.” As such, the concept of ambient computing is closely related to the concept of the Internet of Things.

Examples of ambient computing technology are found in smart homes, cars, and business premises, as well as in other domains such as health care, education, entertainment, and transportation.

So, now that we know what ambient computing is, we can focus on the next question: what does ambient computing mean for lawyers and the legal profession? Three items come to mind: what are the legal aspects of ambient computing? What are the benefits for lawyers? What are the risks and challenges for lawyers?

Ambient Law: the legal aspects of ambient computing

When the Internet of Things was starting to take form, the term ambient law was introduced to refer to the legal aspects of using ambient technology. There are four main areas where legal issues can arise, and we can pair them into two sets of two. On the one hand, there is data privacy and security. On the other hand, there is liability and accountability.

Data Privacy and Security

Ambient computing involves collecting, processing, and sharing large amounts of personal and sensitive data from various sources and devices, which raises significant privacy and security concerns.

Privacy: In our previous article we wrote that ambient computing collects vast amounts of data about users’ behaviour, preferences, location, health, and more. This data can be used for beneficial purposes, such as improving services and personalization. But it can also be misused or compromised by malicious actors or third parties. Or it can be sold to third parties, often without the users’ knowledge or consent. Many car manufacturers, e.g., are guilty of this.

In this context, it is worth referring to the SWAMI project, which stands for Safeguards in a World of Ambient Intelligence. This project took a precautionary approach in its exploration of the privacy risks in Ambient Intelligence (AmI) and sought ways to reduce those risks.

The project discovered that several “dark scenarios” were possible that would have negative implications for privacy protection. It identified various threats and vulnerabilities. Legal analysis of these scenarios also revealed shortcomings in the current legal framework: existing legislation cannot provide adequate privacy protection in the AmI environment.

The project concluded that a new approach to privacy and data protection is needed, based on control and responsibility rather than on restriction and prohibition.

Security: Again, there are several aspects to the security side of ambient computing. On the one hand, all the personal data it collects must be protected. On the other hand, each new ambient device in essence increases the security risk. Ambient technologies can expose users’ devices and data to cyberattacks or physical tampering, which can compromise users’ safety and the functionality of their devices. Cars and baby monitors, e.g., appear to be easy targets for hackers.

There have been initiatives already to tackle the possible security risks inherent in ambient computing. Relevant data security laws generally focus on data protection, cybersecurity, cross-border data transfers, the rights of the data subject, and on penalties for non-compliance.

Liability and accountability

The other two aspects are legal liability and accountability: Ambient computing involves delegating some decisions and actions to autonomous agents or systems that may not be fully transparent or predictable. This raises questions about who is responsible and liable for the outcomes or consequences of those decisions or actions, especially when they cause harm or damage to others. (In a previous article on robot law, we asked who would be responsible for a robot’s actions: the robot, the owner, or the manufacturer?)

As we are dealing with new technologies that are literally all around us, legal liability and accountability in ambient computing are complex issues.

What are the benefits of ambient computing for lawyers?

In our previous article, we highlighted some general benefits of ambient computing. These include convenience, efficiency, engagement, and empowerment. More specifically for lawyers, ambient computing can offer three groups of benefits.

A first set of benefits has to do with improving productivity and efficiency. Ambient computing technology can automate and streamline many routine tasks and processes that lawyers perform. Some law firm management software can already be voice controlled and work together with virtual assistants. (Our article on virtual legal assistants discusses this, too).

Ambient computing can also enhance client experience and satisfaction. It can enable lawyers to provide more personalized, responsive, and proactive service to their clients, by leveraging data and insights from various sources and devices.

Finally, ambient computing can open up new business models and opportunities. It can create new types of services, products, and platforms that leverage ambient intelligence and connectivity.

What are the challenges?

Ambient computing also poses some challenges and risks for lawyers, including the ones we already mentioned above when talking about ambient law.

When it comes to protecting data privacy and security, lawyers have a duty to protect the confidentiality and integrity of their clients’ data, as well as their own data. Therefore, they need to ensure that they comply with the applicable laws and regulations on data protection, such as the GDPR. They also must make sure their ambient technology complies with the ethical standards and best practices of their profession. Furthermore, they need to be aware of the potential threats and vulnerabilities that ambient computing introduces, such as data breaches, cyberattacks, unauthorized access, etc., and take appropriate measures to prevent or mitigate them.

For lawyers, too, there are aspects of legal liability and accountability. Ambient computing involves delegating some decisions and actions to autonomous agents or systems that may not be fully transparent or predictable. This raises questions about who is responsible and liable for the outcomes or consequences of those decisions or actions, especially when they cause harm or damage to others. Lawyers need to understand the legal implications and risks of using ambient computing in their practice or advising their clients on it. They also need to ensure that they have adequate contracts, policies, insurance, etc., to cover any potential liability or claims that may arise from ambient computing.

Finally, ambient computing may force lawyers to adapt to changing roles and skills. Ambient computing may disrupt or transform some aspects of the legal profession or industry, by creating new demands or expectations from clients or stakeholders. Lawyers need to be prepared to adapt to these changes and embrace new roles and skills that ambient computing requires or enables. For example, they may need to become more tech-savvy or data-driven, collaborate more with other professionals or disciplines, or specialize in new areas or domains related to ambient computing.

Conclusion

Ambient computing is an emerging trend that has significant implications for lawyers and the legal profession. Ambient computing can offer many benefits for lawyers who want to improve their practice and service delivery. However, it also poses some challenges and risks that lawyers need to address carefully. Lawyers who want to embrace ambient computing need to be aware of the legal and regulatory aspects of ambient computing in their jurisdiction or context. They also need to be proactive in learning and adopting the best practices and tools that ambient computing provides or demands.

Sources:

Virtual Legal Assistants

In this article, we discuss virtual legal assistants (VLAs). We answer questions like, “What are virtual legal assistants?”, “What services do they offer?”, “What are the benefits of using virtual legal assistants?”, and “What are the limitations?”. We also have a look at some statistics.

What are virtual legal assistants?

When we read articles on virtual legal assistants, we discover that the term is used in different ways. Some definitions restrict it to physical persons who work remotely and to whom purely administrative tasks are outsourced. Most authors also include the work of (remote) paralegals, while others also include all the services offered by third-party (or alternative) legal service providers, who may use AI-powered or technology-driven platforms like bots. So, in its widest sense, a virtual legal assistant is anyone or anything that remotely assists lawyers, law firms, and legal professionals with various tasks and processes. They typically work as subcontractors for the law firm.

What services do they offer?

Virtual legal assistants offer a wide range of services. They can streamline and facilitate client communications and interactions. E.g., they can answer frequently asked questions and provide updates on case status, while maintaining confidentiality and security. They can personalize client engagement. And if you work with VLA bots and/or with physical people in different locations, they can guarantee 24/7 availability and instant responses.

Virtual legal assistants can enhance legal research. They can help lawyers find relevant case law, statutes, regulations, and legal articles to support their arguments and build stronger cases. They provide instant access to legal knowledge.

VLAs can assist in drafting legal documents such as contracts, agreements, pleadings, and other legal correspondence, often by generating templates or suggesting content based on context. One area where VLA bots have proven very useful is contract review. They can review contracts, highlight important clauses, identify potential risks, and ensure compliance with relevant laws.

Virtual legal assistants also contribute to facilitating and optimizing case management and workflow. They can organize and manage case-related information, deadlines, and tasks, streamlining the workflow for lawyers and legal teams. VLA bots can provide automated case updates.

Other areas where VLAs are useful include bookkeeping, billing, and time tracking. They can help lawyers track billable hours and manage invoicing for clients.

You can also hire a VLA for data entry.

There is the aspect of due diligence, as well. Virtual legal assistants can assist in conducting due diligence for mergers, acquisitions, or other transactions by analysing legal and financial data.

VLA bots are also useful for legal analytics. They can analyse large sets of legal data and provide insights into trends, patterns, and potential outcomes.

Finally, there is E-discovery. VLA bots can help with the process of identifying, collecting, and analysing electronically stored information (ESI) for litigation purposes.

Some statistics

There are plenty of interesting statistics available when it comes to virtual assistants. Here is a selection.

  • Virtual assistants can decrease operating costs by up to 78%.
  • Investing in virtual assistants cuts attrition by 50%. (The attrition rate is the proportion of people leaving an organization over a given period.)
  • Virtual assistants increase productivity by 13%.
  • According to a survey by the American Bar Association, 26% of lawyers use virtual assistants or paralegals. The 2020 Legal Trends Report found that law firms only spend an average of 2.5 hours each day on billable work, which can be improved by delegating work to legal virtual assistants.
  • A study by the University of Oxford found that 23% of legal work can be automated by existing technology, and that virtual assistants can handle tasks such as document review, contract analysis, due diligence, and research.
  • A report by Deloitte estimated that 39% of legal jobs will be replaced by automation, and that virtual assistants will play a key role in enhancing productivity, efficiency, and accuracy.
  • A survey by LawGeex found that virtual assistants can review contracts faster and more accurately than human lawyers. The average accuracy rate for virtual assistants was 94%, compared to 85% for human lawyers. The average time for virtual assistants to review a contract was 26 seconds, compared to 92 minutes for human lawyers.
  • According to Gartner, by 2023, virtual legal assistants (VLAs) will field 25% of internal requests to legal departments at large enterprises, increasing operational capacity for in-house corporate teams.
  • A survey by Virtudesk found that 82% of business owners who hired virtual assistants reported increased productivity and efficiency, and 78% said they saved money on operational costs.

What are the benefits of using virtual legal assistants?

We listed the tasks virtual legal assistants can do above. By delegating these tasks to a virtual legal assistant, you can free up your time and focus on the core aspects of your practice, such as strategy, advocacy, and client relations. As such, they increase efficiency and productivity.

You can also reduce your overhead costs, as you only pay for the services you need, when you need them. You don’t have to worry about hiring, training, supervising, or providing benefits to an in-house staff member. In other words, they can also be a more cost-effective solution compared to hiring additional staff for administrative tasks. (Cf. the statistics quoted above).

A virtual legal assistant can also offer you flexibility and convenience, as they can work from anywhere and often at any time. You can access their services on demand, without being limited by office hours or location. You can also communicate with them through various channels, such as phone, email, chat, or video conferencing. VLA bots work 24/7.

There is also the access-to-technology aspect. AI virtual legal assistants can automate repetitive tasks and provide access to advanced tools that might otherwise not be affordable or available to smaller law firms.

Virtual legal assistants increase accuracy. Especially AI-driven assistants can often perform tasks with a high level of accuracy and consistency, reducing the likelihood of human errors. (Cf. our article on when lawyers and robots compete).

Scalability is another benefit. Working with VLAs allows you to adapt easily to the changing needs of your law practice, whether that means handling increased workloads during busy periods or scaling down during quieter times.

What are the limitations?

While virtual legal assistants can be valuable tools, they are not meant to replace human lawyers. Instead, they complement legal professionals by enhancing their productivity and efficiency. It’s essential to consider the specific needs of the law practice and the capabilities of the virtual legal assistant platform before making a choice. They are meant to assist lawyers, not replace them.

Another thing to keep in mind is that at present most of the VLA bots are only available in English.

Conclusion

The use of virtual legal assistants is on the rise, and that should not come as a surprise. They boost efficiency, productivity, are cost-effective, and allow lawyers to focus on legal work.

 

Sources:

Statistics

The dangers of artificial intelligence

Artificial intelligence (AI) is a powerful technology that can bring many benefits to society. However, AI also poses significant risks and challenges that need to be addressed with caution and responsibility. In this article, we explore the questions, “What are the dangers of artificial intelligence?”, and “Does regulation offer a solution?”

The possible dangers of artificial intelligence have been making headlines lately. First, Elon Musk and several experts called for a pause in the development of AI. They were concerned that we could lose control over AI considering how much progress has been made recently. They expressed their worries that AI could pose a genuine risk to society. A second group of experts, however, replied that Musk and his companions were severely overstating the risks involved and labelled them “needlessly alarmist”. But then a third group again warned of the dangers of artificial intelligence. This third group included people like Geoffrey Hinton, who has been called the godfather of AI. They even explicitly stated that AI could lead to the extinction of humankind.

Since those three groups stated their views, many articles have been written about the dangers of AI. And the calls to regulate AI have become louder than ever before. (We published an article on initiatives to regulate AI in October 2022). Several countries have started taking initiatives.

What are the dangers of artificial intelligence?

So, what are the dangers of artificial intelligence? As with any powerful technology, it can be used for nefarious purposes. It can be weaponized and used for criminal purposes. But even the proper use of AI holds inherent risks and can lead to unwanted consequences. Let us have a closer look.

A lot of attention has already been paid in the media to the errors, misinformation, and hallucinations of artificial intelligence. Tools like ChatGPT are programmed to sound convincing, not to be accurate. ChatGPT gets its information from the Internet, but the Internet contains a lot of information that is not correct, so its answers reflect this. Worse, because it is programmed to provide an answer whenever it can, it sometimes just makes things up. Such instances have been called hallucinations. In a lawsuit in the US, e.g., a lawyer had to admit that the precedents he had quoted did not exist and were fabricated by ChatGPT. (In a previous article on ChatGPT, we warned that any legal feedback it gives must be double-checked).

As soon as ChatGPT became available, cybercriminals started using it to their advantage. A second set of dangers therefore has to do with cybercrime and cybersecurity threats: AI can be exploited by malicious actors to launch sophisticated cyberattacks. This includes using AI algorithms to automate and enhance hacking techniques, identify vulnerabilities, and breach security systems. Phishing attacks have also become more sophisticated and harder to detect.

AI can also be used for cyber espionage and surveillance: AI can be employed for sophisticated cyber espionage activities, including intelligence gathering, surveillance, and intrusion into critical systems. Related to this is the risk of invasion of privacy and data manipulation. AI can collect and analyse massive amounts of personal data from various sources, such as social media, cameras, sensors, and biometrics. This can enable AI to infer sensitive information about people’s identities, preferences, behaviours, and emotions. AI can also use this data to track and monitor people’s movements, activities, and interactions. This can pose threats to human rights, such as freedom of expression, association, and assembly.

Increased usage of AI will also lead to the loss of jobs through automation. AI can perform many tasks faster and more cheaply than humans, which can lead to unemployment and inequality. An article on ZDNet estimates that AI could automate 300 million jobs; approximately 28% of current jobs could be at risk.

There also is a risk of loss of control. As AI systems become more powerful, there is a risk that we will lose control over them. This could lead to AI systems making decisions that are harmful to humans, such as launching a nuclear attack or starting a war. This loss of control is also a major concern with regard to the weaponization of AI. As AI technology advances, there is a worry that it could be weaponized by state or non-state actors. Autonomous weapon systems equipped with AI could potentially make lethal decisions without human intervention, leading to significant ethical and humanitarian concerns.

We already mentioned errors, misinformation, and hallucinations; those are unintended side effects of AI. A related danger is the deliberate manipulation and misinformation of society through algorithms. AI can generate realistic and persuasive content, such as deepfakes, fake news, and propaganda, that can influence people’s opinions and behaviours. AI can also exploit people’s psychological biases and preferences to manipulate their choices and actions, such as online shopping, voting, and dating.

Generative AI tends to use existing data as its basis for creating new content. But this can cause issues of infringement of intellectual property rights. (We briefly discussed this in our article on generative AI).

Another risk inherent to the fact that AI learns from large datasets is bias and discrimination. If this data contains biases, then AI can amplify and perpetuate them. This poses a significant danger in areas such as hiring practices, lending decisions, and law enforcement, where biased AI systems can lead to unfair outcomes. And if AI technologies are not accessible or affordable for all, they could exacerbate existing social and economic inequalities.

Related to this are ethical implications. As AI systems become more sophisticated, they may face ethical dilemmas, such as decisions involving human life or the prioritization of certain values. Think, e.g., of self-driving vehicles when an accident cannot be avoided: do you sacrifice the driver if it means saving more lives? It is crucial to establish ethical frameworks and guidelines for the development and deployment of AI technologies. Encouraging interdisciplinary collaboration among experts in technology, ethics, and philosophy can help navigate these complex ethical challenges.

At present, there is insufficient regulation regarding the accountability and transparency of AI. As AI becomes increasingly autonomous, accountability and transparency become essential to address the potential unintended consequences of AI. In a previous article on robot law, we asked the question who is accountable when, e.g., a robot causes an accident. Is it the manufacturer, the owner, or – as AI becomes more and more self-aware – could it be the robot? Similarly, when ChatGPT provides false information, who is liable? In the US, Georgia radio host Mark Walters found that ChatGPT was spreading false information about him, accusing him of embezzling money. So, he is suing OpenAI, the creators of ChatGPT.

As the abovementioned example of the lawyer quoting non-existing precedents illustrated, there also is a risk of dependence and overreliance: Relying too heavily on AI systems without proper understanding or human oversight can lead to errors, system failures, or the loss of critical skills and knowledge.

Finally, there is the matter of superintelligence that several experts warn about. They claim that the development of highly autonomous AI systems with superintelligence surpassing human capabilities poses a potential existential risk. The ability of such systems to rapidly self-improve and make decisions beyond human comprehension raises concerns about control and ethical implications. Managing this risk requires ongoing interdisciplinary research, collaboration, and open dialogue among experts, policymakers, and society at large. On the other hand, one expert said that it is baseless to automatically assume that superintelligent AI will become destructive, just because it could. Still, the EU initiative includes the requirement to build in a compulsory kill switch that allows the AI to be switched off at any given moment.

Does regulation offer a solution?

In recent weeks, several countries have announced initiatives to regulate AI. The EU already had its own initiative. At the end of May, its tech chief Margrethe Vestager said she believed a draft voluntary code of conduct for generative AI could be drawn up “within the next weeks”, with a final proposal for industry to sign up “very, very soon”. The US, Australia, and Singapore also have submitted proposals to regulate AI.

Several of the abovementioned dangers can be addressed through regulation. Let us go over some examples.

Regulations for cybercrime and cybersecurity should emphasize strong cybersecurity measures, encryption standards, and continuous monitoring for AI-driven threats.

To counter cyber espionage and surveillance risks, we need robust cybersecurity practices, advanced threat detection tech, and global cooperation to share intelligence and establish norms against cyber espionage.

Privacy and data protection regulations should enforce strict standards, incentivize secure protocols, and impose severe penalties for breaches, safeguarding individuals and businesses from AI-enabled cybercrime.

To prevent the loss of jobs, societies need to invest in education and training for workers to adapt to the changing labour market and create new opportunities for human-AI collaboration.

Addressing AI weaponization requires international cooperation, open discussions, and establishing norms, treaties, or agreements to prevent uncontrolled development and use of AI in military applications.

To combat deepfakes and propaganda, we must develop ethical standards and regulations for AI content creation and dissemination. Additionally, educating people on critical evaluation and information verification is essential.

Addressing bias and discrimination involves ensuring diverse and representative training data, rigorous bias testing, and transparent processes for auditing and correcting AI systems. Ethical guidelines and regulations should promote fairness, accountability, and inclusivity.

When it comes to accountability and transparency, regulatory frameworks can demand that developers and organizations provide clear explanations of how AI systems make decisions. This enables better understanding, identification of potential biases or errors, and the ability to rectify any unintended consequences.

At the same time, regulation also has its limitations. While it is important, e.g., to regulate things like cybercrime or the weaponization of AI, it is also clear that regulation will not put an end to these practices. After all, by definition, cybercriminals don’t tend to care about regulations. And even though several types of weapons of mass destruction have been outlawed, they are still being produced and used by several actors. But regulation does help to hold transgressors accountable.

It is also difficult to assess how disruptive the impact of AI will be on society. Depending on how disruptive it is, additional measures may be needed.

Conclusion

We have reached a stage where AI has become so advanced that it will change the world and the way we live. This is already creating issues that need to be addressed. And as with any powerful technology, it can be abused. Those risks, too, need to be addressed. But while we must acknowledge these issues, it should also be clear that the benefits outweigh the risks, as long as we don’t get ahead of ourselves. At present, humans abusing AI are a greater danger than AI itself.

 

Sources:

 

Generative AI

In a previous article, we talked about ChatGPT. It is a prime example of generative AI (artificial intelligence). In this article, we will explore generative AI in a bit more detail. We'll answer questions like, "What is Generative AI?", "Why is Generative AI important?", "What can it do?", "What are the downsides?", and "What are the Generative AI applications for lawyers?".

What is Generative AI?

A website dedicated to generative AI defines it as “the part of Artificial Intelligence that can generate all kinds of data, including audio, code, images, text, simulations, 3D objects, videos, and so forth. It takes inspiration from existing data, but also generates new and unexpected outputs, breaking new ground in the world of product design, art, and many more.” (generativeai.net)

The definition Sabrina Ortiz gives on ZDNet is complementary: “All it refers to is AI algorithms that generate or create an output, such as text, photo, video, code, data, and 3D renderings, from data they are trained on. The premise of generative AI is to create content, as opposed to other forms of AI, which might be used for other purposes, such as analysing data or helping to control a self-driving car.” As such, Generative AI is a type of machine learning that is specifically designed to create (generate) content.

Two types of generative AI have been making headlines. There are programs that can create visual art, like Midjourney or DALL-E 2. And there are applications like ChatGPT that can generate almost any desired text output and excel at conversation in natural language.
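The core idea behind these systems, generating new output from patterns learned in existing data, can be illustrated with a deliberately tiny example. The sketch below is not how ChatGPT actually works (large language models are vastly more sophisticated); it is a toy word-chain model, included purely to make the "learn from data, then generate something new" principle concrete:

```python
import random

def train(text):
    """Learn which words tend to follow which: a toy 'model' built
    from training data, analogous in spirit (only) to how generative
    AI learns patterns from large text corpora."""
    words = text.split()
    model = {}
    for i in range(len(words) - 1):
        model.setdefault(words[i], []).append(words[i + 1])
    return model

def generate(model, start, length=8, seed=None):
    """Produce new text by repeatedly picking a plausible next word.
    The output follows the patterns of the training data but is not
    a verbatim copy of it."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        choices = model.get(out[-1])
        if not choices:
            break
        out.append(rng.choice(choices))
    return " ".join(out)

corpus = "the court held that the contract was void and the claim failed"
model = train(corpus)
print(generate(model, "the", length=6, seed=42))
```

Even this toy shows the defining property of generative AI: the output is shaped by the training data, including any errors or biases it contains.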

Why is Generative AI important?

Generative AI is still in its early stages, and already it can perform impressive tasks. As it grows and becomes more powerful, it will fundamentally change the way we operate and live. Many experts agree it will have an impact that is at least as big as the introduction of the Internet. Just think of how much a part of our daily lives the Internet has become. Generative AI, too, is expected to become fully integrated into our lives, and it is expected to do so quickly. One expert predicts that, on average, generative AI systems will double in power every 18 months. Only four months after ChatGPT 3.5 was released, a new, more powerful, more accurate, and more sophisticated version 4.0 followed on 14 March 2023. The new version is a first step towards multimodal generative AI, i.e., AI that can work with several media simultaneously: text, graphics, video, audio. It can create output of over 25 000 words of text, which allows it to be more creative and collaborative. And it's safer and faster.

Let us next have a look at what generative AI can already do, and what it will be able to do soon.

What can it do?

One of the first areas where generative AI made major breakthroughs was the creation of visual art. Sabrina Ortiz explains, "Generative AI art is created by AI models that are trained on existing art. The model is trained on billions of images found across the internet. The model uses this data to learn styles of pictures and then uses this insight to generate new art when prompted by an individual through text." Several free AI art generators are available online for you to try out yourself.

We already know from our previous article that ChatGPT can create virtually any text output. It can write emails and other correspondence, papers, a range of legal documents including contracts, programming code, episodes of TV series, etc. It can assist in research, make summaries of text, describe artwork, etc.

More and more search engines are starting to use generative AI as well. Bing, DuckDuckGo, and You.com, e.g., all already have a chat interface. When you ask a question, you get an answer in natural language, instead of a list of URLs. Bing even gives the references that it based its feedback on. Google is expected to launch its own generative AI enabled search engine soon.

Turning specifically to programming, one of the major platforms for developers, GitHub, announced an AI-powered developer tool, Copilot for Business, that can write code, debug, and give feedback on existing code, suggesting fixes for issues it detects.

Google’s MusicLM already can write music upon request, and the new ChatGPT version 4 announced a similar offering, too. YouTube also has announced that it will start offering generative AI assistance for video creation.

Generative AI tools can be useful writing assistants. The article on g2.com, mentioned in the sources, lists 48 free writing assistants, though many of them use a freemium model. Writer's block may soon be a thing of the past, as several of these writing assistants only need a keyword to start producing a first draft. You even get to choose the writing style.

Generative AI can also accelerate scientific research and increase our knowledge. It can, e.g., lower healthcare costs and speed up drug development.

In Britain, a nightclub successfully organized a dance event where the DJ was an AI bot.

All existing chatbots can be upgraded to become far better at natural language conversations. And generative AI, integrated with the right customer processes, will improve customer experience.

As you can see, even though we’re only at the beginning of the generative AI revolution, the possibilities are endless.

What are the downsides?

At present, generative AI tools are mostly assistive: their output needs to be supervised. ChatGPT, e.g., sometimes gives incorrect answers. Worse, it can simply make things up, and an experiment with a legal chatbot discovered that the bot started lying because it had concluded that lying was the most effective way to get the desired end result. So there are no guarantees that the produced output is correct. And the AI system does not care whether what it does is morally or legally acceptable. Extra safeguards will have to be built in, which is why there are several calls to regulate AI.

There also is an ongoing debate about intellectual property rights. If a program takes an existing image and merely applies one or more filters, does this infringe on the intellectual property of the original artist? Where do you draw the line? And who owns the copyright on what generative AI creates? If, e.g., a pharmaceutical company uses an AI tool to create a new drug, who can take a patent? Is it the pharmaceutical company, the company that created the AI tool, or the AI tool itself?

And as generative AI becomes better, it will transform the knowledge and creative marketplaces, which will inevitably lead to the loss of jobs.

Generative AI applications for lawyers

As a result of the quick progress in generative AI, existing legal chatbots are already being upgraded. A first improvement has to do with user convenience and user-friendliness: users can now interact with the bots through a natural language interface. The new generation of bots understands more and is also expected to become faster, safer, and more accurate. The new ChatGPT 4 scored in the 90th percentile on the bar exams, whereas ChatGPT 3, only a few months earlier, barely passed some exams.

Virtual Legal Assistants (VLA) are getting more and more effective in:

  • Legal research
  • Drafting, reviewing, and summarizing legal documents: contracts, demand letters, discovery demands, nondisclosure agreements, employment agreements, etc.
  • Correspondence
  • Creative collaboration
  • Brainstorming, etc.

As mentioned before, at present these AI assistants are just that, i.e., assistants. They can create draft versions of legal documents, but those still need revision by an actual human lawyer. These VLAs still make errors. But at the same time, they can already considerably enhance productivity by saving you a lot of time. And they are getting better and better fast, as the example of the bar exams confirms.

 

Sources:

 

Legal Technology Predictions for 2023

Towards the end of every calendar year, the American Bar Association publishes the results of its annual legal technology survey. Several legal service providers, experts, and reporters, too, analyse existing trends and subsequently make their own legal technology predictions for 2023. Some items stand out that most pay attention to. In this article, we will look at automation, artificial intelligence, cloud-native solutions, virtual legal assistants, data privacy and cybersecurity, crypto technologies, blockchain, and smart contracts. We will briefly pay attention to some other trends, as well.

Automation

Automation remains a major driver of change in many industries. The legal sector is no exception, even though it lags behind many other sectors: lawyers have been slower to recognize that automation is beneficial. Automation is making many processes in the legal industry faster, more efficient, and less expensive. It has proven successful in fields like legal research, e-discovery, and document review and management. In 2023, we can expect to see this trend continue, with a renewed focus on automating law firm administration and the creation and review of legal documents. Automated workflows can be used to streamline legal processes such as litigation support, e-discovery, and case management. Automation can also assist in organizing and tracking progress and regulatory changes, data collection, reporting, and communication. An increase in automation will help to improve the accuracy of legal processes, reducing the risk of errors and increasing efficiency.

Artificial Intelligence

Artificial Intelligence is becoming ubiquitous. In many aspects of our lives, there now are AI solutions available that make life easier. In the legal sector, too, AI is starting to make waves. In all the above-mentioned examples of automation, AI is playing a crucial role. As mentioned above, AI has already been successfully assisting lawyers with legal research, with process and workflow automation, with the generation of legal documents, as well as with e-discovery. But those are still fairly simple applications of AI. It can do far more. These days, AI is also being used to digest vast volumes of text and voice conversations, identify patterns, or carry out impressive feats of predictive modelling. The virtual legal assistants that we’ll discuss below, too, are all AI applications. If properly used, AI can save law firms much time and money. In 2023, we can expect to see a more widespread adoption of AI in the legal sector. (More on Artificial Intelligence and the Law).

Cloud-Native Solutions

Cloud computing has been a game-changer for many industries. Previous reports had already revealed that lawyers, too, are relying more and more on cloud solutions. This should not come as a surprise, as cloud-based solutions provide many benefits, including reduced costs, increased scalability, and improved data security. They help lawyers and clients share files and data across disparate platforms rather than relying solely on emails. Additionally, cloud-based solutions are more accessible, allowing legal firms to work from anywhere and collaborate more effectively with clients and other stakeholders. In 2023, we can expect this trend to continue. (In the past, we have published articles on cloud solutions for lawyers, on managing your law firm in the cloud, and on lawyers in the cloud).

Virtual Legal Assistants (VLAs)

In the past, we have talked on several occasions about legal chatbots. Chatbots have sufficiently matured to now start playing the role of virtual legal assistants. VLAs are AI-powered chatbots that build on basic neural network computing models to harness the power of deep learning. They use artificial intelligence algorithms to assist law firms with various tasks. Gartner predicts VLAs can answer one-quarter of internal requests made to legal departments. They extend the operational capacity of law firms as well as of in-house corporate legal teams. As a result, they assist in reducing lawyers’ average response time and producing distinct service delivery efficiencies. Furthermore, as VLAs are a form of automation, all the benefits of automation apply here too: virtual legal assistants can help to improve the accuracy of legal work, reduce the risk of errors and increase efficiency. At present, virtual legal assistants are still primarily being used in uncomplicated and repetitive operations. Recent breakthroughs, however, indicate that they are already able to take on more complex tasks and will continue to do so.

Data Privacy and Cybersecurity

Ever since the GDPR, data privacy and cybersecurity have become increasingly important. In 2023, we can expect an ongoing emphasis on data privacy, as well as increased attention to cybersecurity in the legal sector. (The examples of high-profile Big Tech corporations receiving massive fines seem to be a good incentive). Law firms have understood that they, too, need to make sure they have robust data privacy and cybersecurity measures in place to protect their clients' confidential information. Several law firms also assist their clients with the legal aspects of data protection.

Crypto technologies, Blockchain, and smart contracts

The market for cryptocurrencies was volatile in 2022. That did not stop an increase in interest in the underlying crypto technologies. Experts predict a) more regulation of cryptocurrencies and crypto technologies, b) wider adoption of cryptocurrency, c) growing interest in decentralized finance (DeFi), and d) more attempts at cryptocurrency taxation. We are already witnessing an intensification of litigation with regard to cryptocurrency and crypto technologies, and this trend is expected to continue. Disputes over NFTs, e.g., are one area where litigation is expected to increase rapidly.

Experts also expect an ongoing interest in and an increased adoption of Blockchain technology. Blockchain can be used to securely store and manage legal data, reducing the risk of data breaches and ensuring the integrity of legal records. Additionally, blockchain can be used to automate many legal processes, such as contract management and dispute resolution, by enabling the creation of smart contracts. As we mentioned in previous articles, smart contracts can streamline many legal processes, reducing the time and cost associated with contract management and dispute resolution. They can also help to increase the transparency and accountability of legal transactions, reducing the risk of fraud and improving the overall efficiency of legal processes.
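The principle behind a smart contract, contractual terms expressed as code that executes automatically once its conditions are met, can be sketched in a few lines. This is a simplified illustration only, not real blockchain code; the `Escrow` class and its method names are invented for this example:

```python
class Escrow:
    """Toy smart-contract-style escrow: funds are released to the
    seller only once delivery is confirmed, and returned to the
    buyer otherwise. The 'contract terms' are simply code paths
    that execute without a human intermediary."""

    def __init__(self, buyer, seller, amount):
        self.buyer = buyer
        self.seller = seller
        self.amount = amount
        self.delivered = False

    def confirm_delivery(self):
        # In a real smart contract this confirmation might come from
        # a signed transaction or an external data feed (an "oracle").
        self.delivered = True

    def settle(self):
        # The condition enforces itself: the code, not a middleman,
        # decides who receives the funds.
        return self.seller if self.delivered else self.buyer

deal = Escrow("Alice", "Bob", 1000)
deal.confirm_delivery()
print(deal.settle())  # Bob receives the funds
```

The design point the sketch makes is the one in the paragraph above: because execution is automatic and deterministic, disputes about whether the terms were carried out largely disappear, which is where the time and cost savings come from.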

Other Trends

The ABA survey report noticed that law firms are spending more money on legal technology than ever before. In many cases, this involved investing more in tightening cybersecurity.

The trend to work remotely and to use video conferencing for virtual meetings that started during the pandemic is ongoing.

More than ever before, lawyers are paying attention to their own work experience, as well as to the user experience of their clients, by making their law firms more client centred. There is an ongoing focus on work-life balance, not only for the lawyers but also for the employees of law firms. Law firms are finally starting to consider things like employee satisfaction.

While billable hours remain the most used fee model, there has been a noticeable increase in lawyers using a subscription fee model.

Finally, the trend of law firms increasingly hiring people with hybrid profiles is continuing. By increasing cognitive diversity, law firms want to close the gap between professionals with knowledge of legal matters and those with enough legal tech expertise to manage the digitization and automation of workflows. Gartner predicts that by the end of 2023, one third of corporate legal departments will have a legal tech expert in charge of managing the digital transformation and automation of internal processes. Large law firms are also increasingly hiring lawyers who are familiar with business administration.

 

Sources:

ChatGPT for Lawyers

In this article we will first talk about recent evolutions in the field of generative Artificial Intelligence (AI) in general, and about a new generation of chat bots. Then we focus on one particular one that is getting a lot of attention, i.e., ChatGPT. What is ChatGPT? What can it do, and what are the limits? Finally, we look at the relevance of ChatGPT for lawyers.

Introduction

We are witnessing the emergence of a new generation of chat bots that are more powerful than ever before. (We discussed legal chat bots before, here and here). Several of them excel in conversation. Some of these conversationalist chat bots recently made headlines on several occasions. In a first example, in December 2022, the DoNotPay chat bot renegotiated a customer’s contract with Comcast’s chat bot and managed to save 120 USD per year. (You read that correctly, two bots effectively renegotiating a contract). Shortly afterwards, a computer using a cloned voice of a customer was connected to the DoNotPay chat bot. A call was made to the support desk of a company and the speaking chat bot negotiated successfully with a live person for a reduction of a commercial penalty. The search engine You.com has added a conversation chat bot that allows people to ask a question and the reply is presented in a conversational format rather than a list of links. Microsoft has announced that its Bing search engine will start offering a conversational interface as well.

Conversationalist chat bots are a form of generative AI. Generative AI has made tremendous progress in other fields, like the creation of digital artwork, filters and effects for all kinds of digital media, and the generation of documents. These can be any documents: legal documents, blog or magazine articles, papers, programming code… Only days ago, the CNET technology website revealed that it had been publishing articles written entirely by generative AI since November 2022. Over a period of two months, it published 74 articles that were written by a bot, and the readers did not notice.

One chat bot in particular has been in the news on a nearly daily basis since it was launched in November 2022. Its name is ChatGPT and the underlying technology has also been used in some of the examples mentioned above.

What is ChatGPT?

ChatGPT stands for Chat Generative Pre-trained Transformer. Wikipedia describes it as "a chatbot launched by OpenAI in November 2022. It is built on top of OpenAI's GPT-3 family of large language models and is fine-tuned (an approach to transfer learning) with both supervised and reinforcement learning techniques. ChatGPT was launched as a prototype on November 30, 2022, and quickly garnered attention for its detailed responses and articulate answers across many domains of knowledge."

In other words, it’s a very advanced chat bot that can carry a conversation. It remembers previous questions you asked and the answers it gave. Because it was trained on a large-scale database of texts, retrieved from the Internet, it can converse on a wide variety of topics. And because it was trained on natural language models, it is quite articulate.
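The "memory" such a chat bot appears to have works, in essence, by resending the whole conversation so far along with every new question. A minimal sketch of that mechanism is shown below; the `fake_model` function is a stand-in invented for illustration, where a real system would call an actual language model:

```python
def fake_model(messages):
    """Stand-in for a real language model: it merely reports how much
    conversational context it received. A real model would generate a
    reply from the full message history."""
    return f"(reply based on {len(messages)} messages of context)"

class Conversation:
    """Each turn appends to the history, and the WHOLE history is sent
    to the model again on every question. That is why the bot can refer
    back to earlier questions and answers."""

    def __init__(self):
        self.messages = []

    def ask(self, question):
        self.messages.append({"role": "user", "content": question})
        answer = fake_model(self.messages)
        self.messages.append({"role": "assistant", "content": answer})
        return answer

chat = Conversation()
print(chat.ask("What is a force majeure clause?"))  # 1 message of context
print(chat.ask("Can you give an example?"))         # 3 messages of context
```

This also explains a practical limit: because the entire history is resent each turn, very long conversations eventually exceed what the model can take in, and the bot starts "forgetting" the oldest turns.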

What can it do and what are the limits?

Its primary use probably is as a knowledge search engine. You can ask a question just like you ask a question in any search engine. But the feedback it gives does not consist of a series of links. Instead, it consults what it has scanned beforehand and provides you with a summary text containing the reply to the question you asked.

But it doesn’t stop there, as the examples we have already mentioned illustrate. You can ask it to write a paper or an article on a chosen topic. You can determine the tone and style of the output. Lecturers have used it to prepare lectures. Many users asked it to write poetry on topics of their choice. They could even ask it to write sonnets or limericks, and it obliged. And most of the time, with impressive results. It succeeds wonderfully well in carrying a philosophical discussion. Programmers have asked it to write program code, etc. It does a great job of describing existing artwork. In short, if the desired output is text-based, chances are ChatGPT can deliver. As one reporter remarked, the possibilities are endless.

There are of course limitations. If the data sets it learned from contained errors, false information, or biases, the system will inherit those. A reporter who asked ChatGPT to write a product review commented on how the writing style and the structure of the article were very professional, but that the content was largely wrong. Many of the specifications it gave were from the predecessor of the product it was asked to review. In other words, a review by a person who has the required knowledge is still needed.

Sometimes, it does not understand the question, and it needs to be rephrased. On the other hand, sometimes the answers are excessively verbose with little valuable content. (I guess that dataset contained speeches by politicians). There still are plenty of topics that it has no reliable knowledge of. When you ask it if it can give you some legal advice, it will tell you it is not qualified to do so. (But if you rephrase the question, you may get an answer anyway, and it may or may not be accurate). Some of the programming code appeared to be copied from sites used by developers, which would constitute a copyright infringement. And much of the suggested programming code turned out to be insufficiently secure. For those reasons, several sites like StackOverflow are banning replies that are generated by ChatGPT.

Several other concerns have also been voiced. As the example of CNET shows, these new generative AI bots have the potential to eliminate the need for a human writer. ChatGPT can also write an entire essay within seconds, making it easier for students to cheat or to avoid learning how to write properly. Another concern is the possible spread of misinformation. If you know enough of the sources in the dataset a chatbot learns from, you could deliberately flood it with false information.

What is the Relevance of ChatGPT for Lawyers?

Lawyers have been using generative AI for a while. It has proven successful in drafting and reviewing contracts and other legal documents. Bots like DoNotPay, Lawdroid, and HelloDivorce are successfully assisting in legal matters on a daily basis. For these existing legal bots, ChatGPT can provide a user-friendly conversationalist interface that makes them easier to use.

When it comes to ChatGPT itself, several lawyers have reported on their experiences and tests with the system. It turned out that it could mimic the work of lawyers with varying degrees of success. For some items, it did a great job. It, e.g., successfully wrote a draft rental agreement. And it did a good job of comparing different versions of a legal document and highlighting the differences. But in other tests, the information it provided was inaccurate or plain wrong, where it, e.g., confused different concepts.

And the concerns that apply to generative AI in general, also apply to ChatGPT. These include concerns about bias and discrimination, privacy and compliance with existing privacy and data protection regulation like the GDPR, fake news and misleading content. For ChatGPT, the issue of intellectual property rights was raised as well. The organization behind ChatGPT claims it never copies texts verbatim, but tests with programming code appear to show differently. (You can’t really paraphrase programming code).

Given the success of and interest in ChatGPT, the usual question was raised whether AI will replace the need for lawyers. And the answer remains the same: no, it won't. At present, the results are often very impressive, but they are not reliable enough. Still, the progress that has been made shows that it will get better and better at performing some of the tasks that lawyers do. It is good at gathering information, at summarizing it, and at comparing texts. And only days ago (13 January 2023) the American Bar Association announced that ChatGPT had successfully passed one of its bar exams on evidence. But lawyers are still needed when it comes to critical thinking or the elaborate application of legal principles.

Conclusion

A new generation of chat bots is showing us the future. Even though tremendous progress has been made, there are still many scenarios where they’re not perfect. Still, they are improving every single day. And while at present supervision is still needed to check the results, they can offer valuable assistance. As one lecturer put it, instead of spending a whole day preparing a lecture, he lets ChatGPT do the preparation for him and write a first draft. He then only needs one hour to review and correct it.

For lawyers, too, the same applies. The legal texts it generates can be hit and miss, and supervision is needed. You could think of the current chat bot as a first- or second-year law student doing an internship: they can save you time, but you have to review what they're doing and correct where necessary. Tom Martin from Lawdroid puts it as follows: "If lawyers frame Generative AI as a push button solution, then it will likely be deemed a failure because some shortcoming can be found with the output from someone's point of view. On the other hand, if success is defined as productive collaboration, then expectations may be better aligned with Generative AI's strengths."

 

Sources: