Tag Archives: artificial intelligence

AI Agents are the next big thing

In our previous article, we looked at legal technology predictions for 2025. Several experts predicted that AI agents would be the most important evolution. So, let’s have a closer look. In this article, we will answer the following questions: “What are AI agents?” and “Why are they important?” We will also talk about AI agents in legal technology.

What are AI Agents?

An artificial intelligence (AI) agent is a software program that can autonomously interact with its environment, collect data, and use the data to perform self-determined tasks to meet predetermined goals. Humans set goals, but an AI agent independently chooses the best actions it needs to perform to achieve those goals. So, it is a system or program that is capable of autonomously performing tasks on behalf of a user or another system by designing its workflow and utilizing available tools. AI agents may improve their performance through learning or by acquiring new knowledge.
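To make the definition above concrete, the perceive-decide-act loop can be sketched in a few lines of Python. This is a deliberately minimal, hypothetical illustration (the environment, goal, and action-selection logic are all invented for the example), but it shows the essential structure: a human sets the goal, while the agent chooses its own actions.

```python
# A minimal, hypothetical sketch of the agent loop described above:
# observe the environment, pick the action that best advances the goal,
# act, and repeat until the goal is met. All names here are illustrative.

def run_agent(environment, goal_reached, choose_action, max_steps=100):
    """Generic perceive-decide-act loop."""
    for step in range(max_steps):
        observation = environment.observe()      # collect data
        if goal_reached(observation):            # goal set by a human
            return step                          # agent decides it is done
        action = choose_action(observation)      # agent picks its own action
        environment.apply(action)                # act on the environment
    return None                                  # goal not reached in time


# Toy environment: the "goal" is to raise a counter to 5.
class CounterEnv:
    def __init__(self):
        self.value = 0
    def observe(self):
        return self.value
    def apply(self, action):
        self.value += action

env = CounterEnv()
steps = run_agent(env, goal_reached=lambda v: v >= 5,
                  choose_action=lambda v: 1)
print(steps)  # the agent needed 5 steps to reach the goal
```

The point of the sketch is the division of responsibility: the caller supplies the goal, while the loop's action choices belong to the agent.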

IBM explains that “AI agents can encompass a wide range of functionalities beyond natural language processing including decision-making, problem-solving, interacting with external environments and executing actions. These agents can be deployed in various applications to solve complex tasks in various enterprise contexts from software design and IT automation to code-generation tools and conversational assistants. They use the advanced natural language processing techniques of large language models (LLMs) to comprehend and respond to user inputs step-by-step and determine when to call on external tools.”
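The tool-calling pattern IBM describes can also be sketched. In the toy example below, a trivial keyword rule stands in for the LLM’s step-by-step decision of when to call an external tool; the tools themselves are stubs invented for the illustration.

```python
# Hypothetical sketch of the tool-calling pattern described in the quote: the
# agent decides, request by request, whether an external tool is needed. A
# trivial keyword rule stands in for the LLM's reasoning; both tools are stubs.

TOOLS = {
    # toy calculator: evaluates a plain arithmetic expression like "2+2"
    "calculator": lambda expr: eval(expr, {"__builtins__": {}}),
    # stub calendar tool: always returns the same date
    "calendar": lambda _request: "2025-01-15",
}

def decide_tool(request: str):
    """Stand-in for the LLM's 'when to call a tool' decision."""
    if any(ch.isdigit() for ch in request):
        return "calculator"
    if "date" in request.lower():
        return "calendar"
    return None  # no tool needed: answer directly

def handle(request: str):
    tool = decide_tool(request)
    if tool is None:
        return "direct answer"
    return TOOLS[tool](request)  # route the request to the chosen tool

print(handle("2+2"))                        # routed to the calculator tool
print(handle("What date is the hearing?"))  # routed to the calendar stub
print(handle("hello"))                      # answered directly
```

In a real agent, a language model replaces `decide_tool`, and the tools are live APIs rather than stubs, but the routing structure is the same.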

Why are they important?

Some refer to agentic AI as the third wave of the AI revolution. The first wave was predictive analytics, where AI could crunch large datasets to discover patterns and make predictions. The second wave was generative AI, which uses deep learning and large language models (LLMs) to perform natural language processing tasks. And now, the third wave consists of AI agents that can autonomously handle complex tasks.

Because they can handle complex tasks autonomously, and better than ever before, AI agents can change the way we work. One headline gives the example of an AI agent that can reduce programming work from months to days. There already are e-commerce agents, sales and marketing agents, customer support agents, and hospitality agents, as well as dynamic pricing systems, content recommendation systems, autonomous vehicles, and manufacturing robots, for example. And they can all do work that was previously done by humans.

AI agents clearly offer several benefits. They can dramatically improve productivity, as they can handle complex tasks without human supervision or intervention. And because processes are automated, this also reduces costs. AI agents can also be used to do research, which in turn allows users to make informed decisions. AI agents also lead to an improved customer experience because they can “personalize product recommendations, provide prompt responses, and innovate to improve customer engagement, conversion, and loyalty.”

But, as with any breakthrough in AI, it is important to remain aware that there is always a dark side, too. Already there are warnings about ransomware AI agents, which work autonomously and are far more sophisticated than their predecessors.

AI Agents in legal technology

For quite a while now, legal technology has been using bots that automate certain processes. In a way, AI agents are the next generation of bots. Many legal technology experts predicted that 2025 would be the year of the legal AI agents.

A selection of predictions on AI agents in legal technology

The National Law Review, also quoted in last month’s article, interviewed more than sixty experts on legal technology. Several of them talked about AI agents in legal technology. Here is a selection of quotes.

Gabe Teninbaum stated that “The biggest surprise in legal AI in 2025 will be the emergence of agentic AI—systems capable of taking autonomous, goal-driven actions within set parameters. These tools won’t just assist lawyers but will independently draft contracts, conduct negotiations, and even manage compliance, pushing the profession to redefine what it means to “practice law.”” And “by 2025, legal AI will shift from supporting tools to decision-making partners, with agentic systems managing tasks like compliance monitoring and preliminary dispute resolution. The surprise won’t be AI’s capability—it will be the speed at which clients demand its adoption.”

Nicola Shaver said, “Agentic AI, with the capability to automate legal workflows end-to-end, will become more prevalent in 2025, as will AI-enabled workflows generally. We will see a move away from the chatbot model to generative AI that is built into the systems where lawyers work and that mimics the way lawyers work, making it easier to adopt. Lawyers should expect to access custom apps for their legal practice areas in places like their document management or practice management systems and will adopt the tools that they like at a deeper level. In 2025, some lawyers will be using generative AI on a daily basis without even noticing it, since it will be an enabler of so many systems in the back end with less of the prompting burden sitting with end users.”

Tom Martin echoes a similar sentiment, calling Agentic AI “a transformative leap in the direct provision of legal services, driven by strengthening multimodal AI models, agentic capabilities, seamless machine-level orchestration, and evolving regulations governing AI-driven legal entities. This shift won’t just streamline existing workflows; it will redefine the way legal services are conceived, delivered, and experienced.”

Jon M. Garon observes that, “The potential for user-operated agents will grow exponentially as these apps create the power to automate calendaring, meeting coordination, note-taking, work-out buddies, and much more, becoming true personal assistants. Lawyers will need to be careful that the agents do not disclose personal or client data, but with that problem solved, these will grow into a significant new market.”

Evan Shenkman explains it as follows: “Think about tools that can listen in on depositions, trials, or client intake meetings, and provide the attorney — in real-time — with AI-powered guidance and assistance (issue spotting, identifying inconsistencies or falsehoods, etc.) based on the tool’s prior review and analysis of the entire case file. Or tools that can continually review the case docket, and then unilaterally alert the attorney of what just happened, what now needs to be done, and include GenAI-created proposed drafts based on prior firm samples. These tools are already in the works and will be mainstream soon enough.”

Benefits of AI Agents in legal technology

The benefits AI Agents will bring to the field of legal technology apply not only to lawyers, but to all legal service providers, including alternative legal service providers.

One of the obvious primary advantages of AI agents in the legal field is their ability to enhance efficiency and reduce costs. Bots have already been doing that to a certain extent by automating repetitive tasks such as document review, legal research, and contract analysis. AI agents are expected to take this process of automating tasks to a new level where entire workflows and more complex tasks will be handled by them as well. This will free up valuable time for attorneys to focus on more complex and strategic aspects of their work. This not only increases productivity but also reduces the likelihood of human error, leading to more accurate outcomes.

The capability to process and analyse large volumes of data at high speed is particularly beneficial in legal research: AI can quickly sift through case law, statutes, and regulations to provide relevant information and insights.
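As a toy illustration of this kind of sifting, the sketch below ranks documents by how many query terms they contain. Real legal research tools use far more sophisticated retrieval; the mini corpus and the scoring rule here are invented purely to show the idea.

```python
# Invented toy example of relevance ranking for legal research: score each
# document by the number of query terms it shares, then sort by that score.
# Production systems use far richer retrieval (embeddings, citations, etc.).

def score(query: str, document: str) -> int:
    """Count how many distinct query terms appear in the document."""
    terms = set(query.lower().split())
    words = set(document.lower().split())
    return len(terms & words)

corpus = {
    "case_a": "statute of limitations in contract disputes",
    "case_b": "trademark infringement and fair use",
    "case_c": "limitations period for breach of contract claims",
}

query = "contract limitations"
ranked = sorted(corpus, key=lambda k: score(query, corpus[k]), reverse=True)
print(ranked)  # cases sharing both query terms rank ahead of case_b
```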

Another significant benefit is improved client service. By providing real-time updates and centralized document management, these agents encourage better collaboration within legal teams. This leads to more cohesive workflows and ensures that all team members are informed and aligned. All of this contributes to enhancing the client experience. (Several experts, some of whom are quoted above, predict that client demand will be a major factor in the adoption of AI agents.)

AI agents also support transfer learning, which enables them to apply knowledge gained in one context to new, related tasks. This reduces the need for extensive retraining and allows legal professionals to leverage AI capabilities across various areas of law.
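The intuition behind transfer learning can be sketched without any machine learning library: knowledge from one task seeds a model for a related task, so only what is genuinely new still needs training. The keyword weights below are invented for the illustration.

```python
# Hedged sketch of the transfer-learning idea: weights learned on one task
# (here, invented keyword weights for contract review) seed a model for a
# related task (lease review), so only the new terms still need training.

contract_weights = {"indemnify": 0.9, "liability": 0.8, "terminate": 0.7}

def transfer(base_weights, new_terms, default=0.1):
    """Start from the base task's weights; new terms get a neutral default."""
    weights = dict(base_weights)            # knowledge carried over
    for term in new_terms:
        weights.setdefault(term, default)   # only these still need training
    return weights

lease_weights = transfer(contract_weights, ["sublet", "terminate"])
print(lease_weights["terminate"])  # 0.7: reused, not retrained
print(lease_weights["sublet"])     # 0.1: new, needs training
```

Real transfer learning works on neural network parameters rather than a keyword dictionary, but the economics are the same: the reused knowledge is what makes retraining cheap.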

 


Legal technology predictions for 2025

At the end of the year and the beginning of a new one, many publications give their predictions for the new year. In this article, we will go over a selection of legal technology predictions for 2025. We can group them in four categories: legal technology predictions that do not involve AI, predictions on legal issues involving AI, predictions on AI in legal services, and finally, some other legal technology predictions on AI.

Legal technology predictions that do not involve AI

While most of the authors focus on the growing impact of AI, there also are legal technology predictions that do not involve it.

A first set of predictions has to do with client demands. Authors anticipate a significant further proliferation of blockchain, cryptocurrencies, and smart contracts. This will result in a growing demand for lawyers who are versed in these matters. Experts also predict that clients’ expectations will keep on rising, and that law firms will have to adapt to that demand. Already, the legal industry is witnessing a shift towards more client-centric services. Overall, experts also predict a growing demand for legal services for SMBs.

A second set of predictions has to do with the investments law firms will be making. Experts predict an overall increase in investments in technology, and more specifically, apart from AI, increases in spending on knowledge management and on cybersecurity.

Cybersecurity remains a critical concern for law firms, especially with the growing reliance on digital tools and AI. The sector is expected to invest more in cyber resilience strategies to counter potential threats, ensuring the protection of sensitive legal data and maintaining client trust. General counsel and chief legal officers need to up their game when it comes to cybersecurity.

Finally, experts expect the billable hour to further decline, and fixed fees and subscription billing to increase.

Predictions on legal issues involving AI

Several authors also focus on legal issues involving AI. On the one hand, there is the topic of regulating AI, and on the other hand, there is the topic of litigation.

Both the EU and the Council of Europe (CoE) published their frameworks on regulating AI. Unlike the EU AI Act, the Council of Europe’s treaty is open to all countries that want to sign up. More sign-ups are expected. When it comes to the US, the situation is unclear, as the incoming Trump administration may withdraw from the CoE treaty. Most experts do not expect the Trump administration to impose its own framework. Several authors do see initiatives at both the state level and the level of local bar associations. The latter may impose ethical rules regarding the use of AI in law firms, especially when it comes to lawyers using generative AI.

There also is an anticipated increase in litigation related to AI tools and practices. One area where experts predict more litigation involves disputes over the unauthorized use of copyrighted materials for AI training. They also expect an increase in product liability lawsuits involving AI systems. And an increase in litigation is also anticipated when it comes to AI-induced biases in processes like job screening, and potential antitrust violations stemming from AI-driven pricing tools.

Predictions on AI in legal services

Most of the predictions, however, focus on how artificial intelligence will impact the delivery of legal services. And the topic that is most talked about is the introduction of AI agents in the delivery of legal services. Some call it the most important evolution for 2025.

So, what are we talking about? An AI agent is a software program designed to operate independently, perceiving its environment, analysing information, and taking actions to achieve specific goals. It gathers data through sensors or input systems, processes this data using logic or machine learning models, and performs tasks or interacts with its surroundings based on its objectives. These agents are widely used in applications such as virtual assistants, self-driving cars, and automated decision-making systems, allowing them to function without constant human intervention. So, you can think of them as the next generation of bots: more advanced and more versatile. And in 2025, they’re expected to have a huge impact on the delivery of legal services and on the way that law firms and legal departments operate. We will discuss AI agents more in depth in a follow-up article.

AI is also becoming more integrated in all aspects of the delivery of legal services, from optimizing and automating workflows to enhancing knowledge management and handling specific tasks autonomously. Most experts anticipate that all cloud-based software for lawyers and law firms will integrate more AI into their systems. Overall, authors also predict that generative AI will become better and more specialized in specific legal areas.

Several authors talk about how artificial intelligence is already leading to a sharp increase in the productizing of legal services. This applies not only to law firms and legal departments, but also to alternative legal service providers. Some expect hybrid lawyers and/or self-service legal platforms to become as ubiquitous as online banking. Some even anticipate that more and more lawyers will start collaborating with robot lawyers. And for the first time, some even predict that within five years, the combination of advances in AI and breakthroughs in quantum computing will start replacing entry-level lawyers.

Other legal technology predictions on AI

Some experts also made some other legal technology predictions on AI. They are optimistic that Generative AI will improve access to justice, and that we will see courts start using Generative AI as well, to become more effective. They also expect a consolidation movement in the market of legal technology service providers. Finally, some expect that Legal AI and Generative AI will become part of the law school curriculum.

 


 

Recent Artificial Intelligence Regulations (2024)

In the past, we have discussed the need for artificial intelligence regulations. The first important initiative was the OECD establishing a series of non-binding guidelines in 2019. Another milestone was the EU AI Act of March 2024. Also in 2024, some other important regulatory frameworks were introduced. In this article, we will have a look at the Council of Europe Framework Convention on AI, at the United Nations’ AI Resolution, as well as at other artificial intelligence regulations and initiatives, including the Responsible AI in the Military Domain (REAIM) summit in Seoul.

Council of Europe Framework Convention on AI

The full name of the Council of Europe’s Framework Convention on AI is the Council of Europe Framework Convention on Artificial Intelligence, Human Rights, Democracy, and the Rule of Law. It is the first legally binding international treaty that specifically focuses on regulating artificial intelligence (AI) in line with fundamental rights and values. Its provisions are meant to ensure that AI systems respect human rights, support democratic principles, and adhere to the rule of law.

The convention is an initiative of the Council of Europe, an international organization founded in 1949. Its goals are similar to those of the UN’s Universal Declaration of Human Rights. It has 46 member states and focuses on promoting human rights, democracy, and the rule of law in Europe.

The history of the framework convention on AI began in 2020, when the Council recognized the need for a legal framework for AI. In 2021, it launched discussions among member states and experts. The aim was to draft a convention that would safeguard fundamental rights while fostering innovation. In 2023, the Council presented a draft of the convention. The framework was officially adopted on 17 May 2024 and opened for signature on 5 September 2024 to countries both within and outside Europe, making it a globally significant agreement. Notably, apart from the 46 Council of Europe member states, another 11 countries – including the US – have signed it as well. More may follow.

Much like the EU AI Act, the convention introduces a risk-based approach, addressing the design, deployment, and decommissioning of AI systems. It emphasizes transparency, accountability, and fairness while encouraging responsible innovation. High-risk AI applications, such as those with the potential to harm human rights, are subject to strict oversight. The treaty also allows flexibility for private actors to comply through alternative methods and includes exemptions for research and national security purposes.

This framework is crucial as it provides a common international standard for managing AI’s potential benefits and risks. It promotes trust in AI technologies by ensuring safeguards against misuse and unintended consequences while fostering innovation. The convention aligns closely with the European Union’s AI Act, reinforcing a shared commitment to ethical AI governance on a global scale.

The treaty is also important because of its ability to shape how AI is integrated into societies, balancing innovation with protecting democratic values. It seeks to protect individuals’ rights. AI systems can make decisions that affect people’s lives, such as in job recruitment or law enforcement. Ensuring that these systems are fair and transparent is crucial. The convention also promotes accountability. It requires AI developers and users to take responsibility for their systems. This helps build trust between the public and technology. Furthermore, the convention supports democracy. It emphasizes the need for public participation in discussions about AI. This ensures that diverse voices are heard in shaping policies. Finally, it sets a precedent and standard for other countries. If Europe leads in AI regulation, other regions may follow. This can create a global framework for responsible AI use.

The United Nations’ AI Resolution

On March 21, 2024, the United Nations General Assembly adopted its first-ever, non-binding resolution on artificial intelligence (AI). This resolution promotes the development of “safe, secure, and trustworthy” AI systems. It is another significant step in creating global norms for managing AI, and it also aims to ensure the technology benefits humanity while addressing its risks. The resolution was led by the United States and co-sponsored by 123 countries, receiving unanimous support from all 193 UN member states.

Here, too, the history of this resolution traces back to the rapid growth of AI technology. As AI started to impact various sectors, concerns about its effects on society grew. We are talking about issues like privacy, bias, and the potential for misuse, which all became prominent. In response, the UN began discussions about how to address these challenges. In 2023, member states began drafting the resolution. After extensive negotiations, the resolution was adopted in March 2024.

The resolution recognizes the transformative potential of AI in addressing global challenges, such as achieving the United Nations’ Sustainable Development Goals. It encourages international cooperation to bridge digital divides, especially between developed and developing countries. One of its goals is to ensure equitable access to AI technologies. Member states are urged to regulate AI systems to protect human rights and privacy, avoid risks, and promote innovation.

Like other regulatory initiatives, this one also underscores the need for global collaboration in governing AI. There is a growing consensus that international regulation is critical to harnessing its benefits responsibly. The resolution aligns with similar efforts, like the European Union’s AI Act and the Council of Europe’s Framework Convention. It emphasizes the importance of ethical, human-centric AI development. It aims to prevent harm while promoting trust in AI systems globally.

This resolution’s importance lies in its acknowledgment of AI’s dual potential: as a tool for progress and a source of risks if left unchecked. It, too, provides a foundation for international frameworks to guide AI use in a way that supports sustainable development and safeguards fundamental rights.

Other artificial intelligence regulations and initiatives

The Global AI Safety Summit

The Global AI Safety Summit is a recurring international conference that discusses the safety and regulation of artificial intelligence (AI). The first Global AI Safety Summit was held on November 1–2, 2023 at Bletchley Park in Milton Keynes, United Kingdom. The summit’s goals included:

  • Developing a shared understanding of the risks of frontier AI
  • Establishing areas for collaboration on AI safety research
  • Launching the UK Artificial Intelligence Safety Institute
  • Testing frontier AI systems against potential harms

The summit was concluded with the Bletchley Declaration, which was signed by 28 nations.

The second Global AI Safety Summit was held in May 2024 and co-hosted by Britain and South Korea. (See below: The Responsible AI in the Military Domain summit). The third Global AI Safety Summit will be held in February 2025 in France.

The Global AI Safety Summit brings together international governments, leading AI companies, civil society groups, and research experts. The summit aims to a) consider the risks of AI, especially at the frontier of development, b) discuss how those risks can be mitigated through internationally coordinated action, and c) understand and mitigate the risks of emerging AI while seizing its opportunities. The overall goal is the prevention and mitigation of harms from AI, whether deliberate or accidental. These harms could be physical, psychological, or economic.

The Responsible AI in the Military Domain (REAIM) summit in Seoul

The Responsible AI in the Military Domain (REAIM) summit was held in Seoul on 10 September 2024. About 60 countries, including the United States, endorsed a “blueprint for action” to govern the responsible use of artificial intelligence (AI) in the military. Notably, China did not endorse this blueprint.

The summit was a follow-up to one held in The Hague in 2023, where countries agreed upon a non-binding call to action on the topic.

The US AI Safety Summit

A separate global AI safety summit was planned by the Biden administration in the US. The idea was similar: to bring the leading stakeholders together to identify key issues and suggest ideas for a regulatory framework. President-elect Trump, however, had indicated that he would undo any such framework, so the plans were put on hold.

 


 

An introduction to AI computers for lawyers

AI computers are being called the biggest development in the PC industry in 25 years. Experts believe they could also trigger a refresh cycle in the PC industry. In this article, we will answer the following questions. What are AI computers? What are the benefits of AI computers, and what are the benefits for lawyers? Do you, as a lawyer, need to get yourself one? What are the challenges and limitations for legal work?

What are AI computers?

So, what are AI computers? The term was launched by Intel, which describes it as follows: an AI PC has a CPU, a GPU, and an NPU, each with specific AI acceleration capabilities. An NPU, or neural processing unit, is a specialized accelerator that handles artificial intelligence (AI) and machine learning (ML) tasks right on your PC instead of sending data to be processed in the cloud. The GPU and CPU can also process these workloads, but the NPU is especially good at low-power AI calculations. The AI PC represents a fundamental shift in how our computers operate. It is not a solution for a problem that didn’t exist before. Instead, it promises to be a huge improvement for everyday PC usage.

In other words, AI PCs are regular personal computers that are supercharged with specialized hardware and software. These are specifically designed to handle tasks involving artificial intelligence and machine learning. When it comes to the hardware, what stands out is the presence of an NPU, i.e., a Neural Processing Unit. Its job is to accelerate AI workloads, particularly those that require real-time processing, like voice recognition, image processing, and deep learning applications.
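The division of labour between NPU, GPU, and CPU amounts to a routing decision: send AI workloads to the most power-efficient capable device that is available. The sketch below illustrates that logic with stubbed-out device detection; real code would query the platform’s inference runtime for its device list.

```python
# Hypothetical sketch of routing AI workloads to the best available device.
# Device detection is stubbed out as a plain set; real code would query an
# inference runtime (e.g. its list of execution providers) instead.

PREFERENCE = ["npu", "gpu", "cpu"]  # low-power AI work prefers the NPU

def pick_device(available):
    """Return the most preferred device present on this machine."""
    for device in PREFERENCE:
        if device in available:
            return device
    raise RuntimeError("no compute device found")

print(pick_device({"cpu", "gpu", "npu"}))  # an AI PC routes to the NPU
print(pick_device({"cpu", "gpu"}))         # an older PC without an NPU
print(pick_device({"cpu"}))                # last resort: the CPU
```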

AI PCs also run specialized software stacks, frameworks, and libraries tailored for Artificial intelligence and Machine Learning workloads. “The distinction between AI software and ‘normal’ software lies in how each type of application processes the work you ask it to do. A conventional application just provides pre-defined tools not unlike the specialty tools in a toolbox: you must learn the best way to use those tools, and you need personal experience in using them to be effective on the project at hand. It’s all up to you, every step of the way. In contrast, AI software can learn, make decisions, and tackle complex creative tasks in the same way a human might. That learning capability gives you a new kind of tool that can simply do the job for you at your request, because it has been trained to do so. This fundamental difference enables AI software to automate complex tasks, offer personalized experiences, and process vast amounts of data efficiently, transforming how we interact with our computers.”

Benefits of AI computers

Why were AI computers created in the first place? Generative AI has become extremely popular, but it puts high workloads on the cloud servers AI services run on. The idea is to share that workload with the users’ PCs. And for that, you need powerful PCs with the necessary hardware and software. In short, AI computers benefit users as well as manufacturers and AI service providers.

Benefits for users

Experts have identified many potential benefits for users. AI PCs can boost productivity, enhance creativity, and improve the user experience. Below are some of the key advantages the literature mentions, in no particular order.

Enhanced and accelerated performance for AI Tasks: AI PCs are equipped with hardware specifically designed to tackle demanding AI applications. This translates to faster processing of complex calculations and data analysis, crucial for tasks like video editing, scientific simulations, and training AI models. This acceleration can significantly speed up the training and inference of deep learning models. And other applications like video conferencing, e.g., can also greatly benefit from this enhanced performance.

Improved efficiency and automation: AI features can automate repetitive tasks, freeing you up for more strategic work. Imagine software that automatically categorizes your files or optimizes battery life based on usage patterns.

Improved power efficiency: AI accelerators like NPUs are designed to be power-efficient, consuming less energy while delivering high performance. Laptop batteries, e.g., will last longer before needing recharging. AI PCs can thus lead to lower operating costs and a smaller environmental footprint.

Personalized User Experience: AI can learn your preferences and adjust settings accordingly. Brightness, keyboard responsiveness, and even video call framing could adapt to your needs, creating a more comfortable and efficient work environment.

Boosted Creativity: some AI PCs come with built-in creative tools that can generate ideas, translate languages, or even write different creative text formats based on your prompts. This can be a game-changer for designers, writers, and anyone looking for a spark of inspiration.

Enhanced Security: AI-powered security features can constantly monitor for threats and potential breaches, offering an extra layer of protection for your data.

Benefits for chip manufacturers and for service providers

The new AI computers do not only benefit users. As mentioned before, having part of the workload done on the users’ side also considerably reduces the workload on the servers of the AI service providers. One expert even estimates that, “By end of 2025, 75% of enterprise-managed data will be processed outside the data centre.” So, service providers will have to invest less in infrastructure.

At the same time, AI PCs can be useful in the data centre, too. Two important benefits they offer are scalability and a faster time-to-market. Many AI PCs support multiple AI accelerators, allowing for scaling up the computational power by adding more accelerators as needed. This scalability enables handling larger and more complex AI models and workloads. The accelerated performance of AI PCs can also significantly reduce the time required for training AI models, enabling faster iteration and deployment cycles for AI applications and solutions.

The introduction of a new type of personal computers is of course also good news for the manufacturers, as it creates a new – and booming – market. It should not come as a surprise then, that all major chip manufacturers like Intel, Nvidia, AMD, and Qualcomm have started making NPU chips. Apple, too, has announced new chips that are AI optimized. It is safe to assume that soon all new PCs, laptops, and tablets will be AI computers.

Benefits for lawyers

All of this then begs the question: do you, as a lawyer, need one? Well, apart from the abovementioned benefits, AI computers can offer lawyers specific benefits, too. They can, e.g., significantly enhance the efficiency of legal practices by automating routine tasks such as document review, legal research, eDiscovery, and contract analysis. Experts anticipate the following benefits.

Improved Legal Research: AI can analyse vast amounts of legal documents, regulations, precedents, and case law, helping lawyers identify relevant precedents and arguments much faster. This can save significant time and effort compared to traditional research methods.

Contract analysis and enhanced due diligence: AI can sift through contracts and financial records, highlighting potential risks and areas requiring closer scrutiny during due diligence processes. This is typically a time-consuming task for lawyers, one that AI can perform very fast, while also improving the accuracy and efficiency of legal reviews.

Legal document analysis, review, and drafting assistance: AI-powered tools can help lawyers draft legal documents by suggesting language, identifying inconsistencies, and ensuring compliance with regulations. AI models can also be trained to analyse and extract relevant information from large volumes of legal documents, contracts, and case files. The computational power of AI PCs can speed up this process significantly.

Predictive analytics: with the help of AI PCs, lawyers can develop predictive models to analyse the potential outcomes of legal cases based on historical data and various factors.
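As a hedged sketch of what such a predictive model might look like, the toy logistic model below scores a case from a few numeric features. The features and weights are entirely invented; a real model would be fitted to historical case data.

```python
import math

# Invented toy sketch of the predictive-analytics idea: score a case's
# likely outcome from a few numeric features with a logistic model. The
# features and weights are illustrative, not fitted to any real data.

WEIGHTS = {"favorable_precedents": 0.8, "adverse_precedents": -0.9,
           "strong_documentation": 0.5}
BIAS = -0.2

def win_probability(case: dict) -> float:
    """Weighted sum of features, squashed to a 0-1 probability."""
    z = BIAS + sum(WEIGHTS[k] * case.get(k, 0) for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))  # logistic function

strong_case = {"favorable_precedents": 3, "adverse_precedents": 0,
               "strong_documentation": 1}
weak_case = {"favorable_precedents": 0, "adverse_precedents": 2}

print(round(win_probability(strong_case), 2))  # high probability
print(round(win_probability(weak_case), 2))    # low probability
```

A production model would of course need careful validation; the sketch only shows the shape of the computation.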

Natural language processing (NLP): AI PCs can be used to train and deploy NLP models for tasks like legal document summarization, information extraction, and sentiment analysis.
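As a minimal illustration of the information-extraction task, the sketch below pulls the most frequent substantive terms from a clause as crude “key terms”. Real NLP models go far beyond word counts; the sample clause and stopword list are invented.

```python
from collections import Counter

# Invented minimal sketch of information extraction: treat the most frequent
# non-stopword terms in a clause as its "key terms". Real NLP models use far
# richer linguistic signals than raw word counts.

STOPWORDS = {"the", "of", "to", "and", "a", "in", "shall", "this"}

def key_terms(text: str, n: int = 3):
    """Return the n most frequent substantive words in the text."""
    words = [w.strip(".,;").lower() for w in text.split()]
    counts = Counter(w for w in words if w and w not in STOPWORDS)
    return [term for term, _ in counts.most_common(n)]

clause = ("The tenant shall pay rent monthly. The tenant shall maintain "
          "the premises. Rent increases require notice to the tenant.")
print(key_terms(clause))  # "tenant" and "rent" dominate the clause
```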

Challenges and limitations for legal work

At present, however, AI computers are still facing some challenges and limitations when it comes to legal work. While AI PCs can provide computational advantages, many legal applications may not require the full power of these specialized systems. For routine legal work, such as drafting documents or conducting basic research, regular desktop or laptop computers might suffice.

AI computers still have limited judgment and creativity. The core tasks of lawyers often involve legal reasoning, strategy, and creative problem-solving, areas where AI is still not very advanced. AI PCs can’t replace a lawyer’s ability to analyse complex situations, develop persuasive arguments, or adapt to unexpected circumstances in court.

There also is the issue of data dependence and accuracy: the effectiveness of AI tools heavily relies on the quality and completeness of the data they’re trained on. Legal data can be complex and nuanced, and errors in the data can lead to inaccurate or misleading results.

The benefits may not justify the higher costs. AI PCs can be significantly more expensive than traditional PCs. For lawyers who don’t handle a high volume of complex legal matters that heavily rely on AI-powered research or due diligence, the cost may therefore not be justified.

CONCLUSION

AI PCs can be a valuable tool for lawyers, especially for tasks like legal research and due diligence. However, they should not be seen as a replacement for human lawyers. AI is best used to augment a lawyer’s skills and expertise, not to replace them. And at present, AI computers may be overkill for day-to-day legal work, where existing computers can handle the workload and the extra cost of an AI PC is not justified.

It is also important to consider that AI computers rely on new and still-evolving technology. AI PCs are a relatively new concept, and their functionalities are still under development. The “killer application” that justifies the potentially higher cost might not be here yet. Moreover, to fully benefit from AI features, you will need compatible software that can leverage the AI capabilities of your PC.

The decision to invest in AI PCs for legal work would depend on factors such as the specific use cases, the volume of data or workload, the complexity of the AI models required, and the potential return on investment. Law firms or legal departments with a significant focus on AI-driven legal technologies may find AI PCs more beneficial than those with more traditional workflows. But for many lawyers, a traditional PC with good legal research software might still be the most practical solution.

 


The EU AI Act

In previous articles, we discussed the dangers of AI and the need for AI regulations. On 13 March 2024, the European Parliament approved the “Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts.” The act was a proposal by the EU Commission of 21 April 2021. The act is usually referred to by its short name of the Artificial Intelligence Act, or the EU AI Act. In this article we look at the following questions: what is the EU AI Act? What is the philosophy of the EU AI Act? We discuss the limited risk applications and the high-risk applications. Finally, we also look at the EU AI Act’s entry into force and the penalties.

What is the EU AI Act?

As the full title suggests, it is a regulation that lays down harmonised rules on artificial intelligence across the EU. Rather than focusing on specific applications, it deals with the risks that applications pose, and categorizes them accordingly. The Act imposes stringent requirements for high-risk categories to ensure safety and fundamental rights are upheld. The Act’s recent endorsement follows a political agreement reached in December 2023, reflecting the EU’s commitment to a balanced approach that fosters innovation while addressing ethical concerns.

Philosophy of the EU AI Act

The AI Act aims to provide AI developers and deployers with clear requirements and obligations regarding specific uses of AI. At the same time, the regulation seeks to reduce administrative and financial burdens for business, in particular for small and medium-sized enterprises (SMEs). The aim of the new rules is to foster trustworthy AI in Europe and beyond, by ensuring that AI systems respect fundamental rights, safety, and ethical principles and by addressing risks of very powerful and impactful AI models.

The AI Act ensures that Europeans can trust what AI has to offer. Because AI applications and frameworks can change rapidly, the EU chose to address the risks that applications pose. While most AI systems pose limited to no risk and can contribute to solving many societal challenges, certain AI systems create risks that must be addressed to avoid undesirable outcomes. The Act distinguishes four levels of risk:

  • Unacceptable risk: applications with unacceptable risk are never allowed. All AI systems considered a clear threat to the safety, livelihoods and rights of people will be banned, from social scoring by governments to toys using voice assistance that encourages dangerous behaviour.
  • High risk: to be allowed, high-risk applications must meet stringent requirements to ensure safety and fundamental rights are upheld. We look at those below.
  • Limited risk: applications are considered to pose limited risk when they lack transparency, and the users of the application may not know what their data are used for or what that usage implies. Limited risk applications can be allowed if they comply with specific transparency obligations.
  • Minimal or no risk: The AI Act allows the free use of minimal-risk and no risk AI. This includes applications such as AI-enabled video games or spam filters. The vast majority of AI systems currently used in the EU fall into this category.
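The four tiers above can be thought of as a simple lookup from system type to legal consequence. A minimal sketch in Python (the example classifications are illustrative shorthand based on the examples given in this article, not legal advice):

```python
from enum import Enum

class Risk(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "allowed only under stringent requirements"
    LIMITED = "allowed subject to transparency obligations"
    MINIMAL = "free use"

# Illustrative mapping, following the examples mentioned above.
EXAMPLES = {
    "government social scoring": Risk.UNACCEPTABLE,
    "CV-screening for recruitment": Risk.HIGH,
    "customer-service chatbot": Risk.LIMITED,
    "email spam filter": Risk.MINIMAL,
}

for system, tier in EXAMPLES.items():
    print(f"{system}: {tier.name} -- {tier.value}")
```

The point of the tiered design is exactly this kind of predictability: the legal consequence follows from the category, not from the individual application.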

Let us have a closer look at the limited and high-risk applications.

Limited Risk Applications

As mentioned, limited risk refers to the risks associated with a lack of transparency. The AI Act introduces specific obligations to ensure that humans are sufficiently informed when necessary. E.g., when using AI systems such as chatbots, humans should be made aware that they are interacting with a machine, so they can make an informed decision to continue or step back. Providers will also have to ensure that AI-generated content is identifiable. In addition, AI-generated text published to inform the public on matters of public interest must be labelled as artificially generated. This also applies to audio and video content constituting deep fakes.

High Risk Applications

Under the EU AI Act, high-risk AI systems are subject to strict regulatory obligations due to their potential impact on safety and fundamental rights.

What are high risk applications?

High-risk AI systems are those that may adversely affect people’s safety or their fundamental rights. These systems are classified into three main categories: a) those covered by EU harmonisation legislation, b) those that are safety components of certain products, and c) those listed as involving high-risk uses. Specifically, high-risk AI includes applications in critical infrastructure, such as traffic control and utilities management, as well as biometric and emotion recognition systems. It also applies to AI used in decision-making processes in education and employment.

What are the requirements for high-risk applications?

High-risk AI systems under the EU AI Act are subject to a comprehensive set of requirements designed to ensure their safety, transparency, and compliance with EU standards. These systems must have a risk management system in place to assess and mitigate potential risks throughout the AI system’s lifecycle. Data governance and management practices are crucial to ensure the quality and integrity of the data used by the AI, including provisions for data protection and privacy. Providers must also create detailed technical documentation that covers all aspects of the AI system, from its design to deployment and maintenance.

Furthermore, high-risk AI systems require robust record-keeping mechanisms to trace the AI’s decision-making process. This is essential for accountability and auditing purposes. Transparency is another key requirement, necessitating clear and accessible information to be provided to users and ensuring they understand the AI system’s capabilities and limitations. Human oversight is mandated to ensure that AI systems do not operate autonomously without human intervention, particularly in critical decision-making processes. Lastly, these systems must demonstrate a high level of accuracy, robustness, and cybersecurity to prevent errors and protect against threats.

The EU AI Act’s entry into force

The act enters into force twenty days after its publication in the Official Journal of the EU, and most of its provisions will apply two years later. This gives member states the opportunity to implement compliant legislation. It also gives providers a two-year window to adapt to the regulation.

The European AI Office, established in February 2024 within the Commission, will oversee the AI Act’s enforcement and implementation with the member states.

Penalties

The EU AI Act enforces a tiered penalty system to ensure compliance with its regulations. For the most severe violations, particularly those involving prohibited AI systems, fines can reach up to €35 million or 7% of the company’s worldwide annual turnover, whichever is higher. Lesser offenses, such as providing incorrect, incomplete, or misleading information to authorities, may result in penalties up to €7.5 million or 1% of the total worldwide annual turnover. These fines are designed to be proportionate to the nature of the infringement and the size of the entity, reflecting the seriousness of non-compliance within the AI sector.
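The “whichever is higher” rule means the effective ceiling scales with company size. A quick sketch of the two tiers quoted above (figures taken from this article; the function name and structure are ours):

```python
def max_fine(turnover_eur: float, violation: str) -> float:
    """Upper bound of the fine under the two tiers described above."""
    tiers = {
        "prohibited_ai": (35_000_000, 0.07),          # EUR 35M or 7% of turnover
        "misleading_information": (7_500_000, 0.01),  # EUR 7.5M or 1% of turnover
    }
    fixed, pct = tiers[violation]
    return max(fixed, pct * turnover_eur)

# A company with EUR 2 billion turnover: 7% = EUR 140M, well above the EUR 35M floor.
print(f"{max_fine(2_000_000_000, 'prohibited_ai'):,.0f}")
```

For smaller companies the fixed amount dominates; for large ones the percentage does, which is what makes the penalty proportionate to the size of the entity.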

Conclusion

The EU AI Act represents a significant step in the regulation of artificial intelligence within the EU. It sets a precedent as the first comprehensive legal framework on AI worldwide. The Act mandates a high level of diligence, including risk assessment, data quality, transparency, human oversight, and accuracy for high-risk systems. Providers and deployers must also adhere to strict requirements regarding registration, quality management, monitoring, record-keeping, and incident reporting. This framework aims to ensure that AI systems operate safely, ethically, and transparently within the EU. Through these efforts, the European AI Office strives to position Europe as a leader in the ethical and sustainable development of AI technologies.

 


The 2024 law firm

Usually, at the beginning of a new year, we look back at the legal technology trends of the past year. Unfortunately, the reports needed to do that are not available yet. So, instead, with Lamiroy Consulting turning 30 in February 2024, we will have a look at the 2024 law firm, and at how law firms have evolved over the last decades. We will discuss a range of topics concerning technology and automation in the 2024 law firm, including artificial intelligence and communications. We will also see how the cloud, remote work, and virtual law offices have changed law firms.

Technology and automation in the 2024 law firm

Let us go back in time. The early 80s saw the introduction of the first personal computers and home computers. Word processors had been around for a few years before that, but were found in only a very small minority of law firms at the time. The Internet as we know it did not exist yet. By 1994, things had started to change, and a legal technology revolution was on the horizon. Fast forward to 2024: law firms that don’t use computers or equivalent mobile devices are an endangered – if not extinct – species. Many operational processes in the law firm have been automated.

So, it is safe to say that over the past decades, technology and automation have transformed the legal industry in many ways. According to a report by The Law Society, some of the factors that have contributed to this transformation include increasing workloads and complexity of work, changing demographic mix of lawyers, and greater client pressure on costs and speed.

Two of the most significant changes have been the introduction of the internet, with its cloud technologies, and of Artificial Intelligence (AI). Most of the evolutions described below have been made possible by the Internet.

Artificial Intelligence

The introduction of Artificial Intelligence (AI) has been one of the main factors driving a substantial transformation of the legal industry. One of its main benefits has been that it notably improved attorney efficiency. AI plays a key role in tasks such as sophisticated writing, research, and document drafting, significantly expediting processes that traditionally could take weeks.

Communications

Law firms have moved from traditional methods of communication such as snail mail to more modern methods. These include email, client portals, and cloud-based communications, like Teams and SharePoint. They allow people to share documents with different levels of permissions, ranging from reading and commenting to editing.

Client portals have become increasingly popular in recent years, allowing clients to access their legal documents and communicate with their attorneys in real-time. This has made it easier for clients to stay informed about their cases and has improved the efficiency of law firms.

Cloud, Remote work, and Virtual Law Office

The legal industry has experienced a notable surge in remote work and virtual law offices, a shift accelerated in particular by the COVID-19 pandemic. Virtual law offices, facilitated by cloud-based practice management software, allow attorneys to work from anywhere, leading to increased flexibility and reduced overhead costs for law firms. The cloud has played a crucial role in this shift: it allows virtual lawyers to run fully functional law firms on any device, at significantly lower costs compared to on-premise solutions.

Digital Marketing and Online Presence

The legal industry has also witnessed major changes in its marketing practices over the past decades, adapting to the internet era. Recent studies indicate that one-third of potential clients initiate their search for an attorney online. This emphasizes the importance of a strong online presence for law firms to stay competitive. Law firms now prioritize digital marketing through channels like social media, email, SEO, and websites. Whether marketing the entire firm or individual lawyers, a robust digital strategy is essential for establishing credibility and connecting with potential clients. Personal branding is crucial for individual lawyers, highlighting achievements and values, while law firms should focus on building trust through a comprehensive digital marketing strategy.

Billing and Financial Changes in the 2024 Law Firm

Another area where the legal industry has undergone significant changes is in billing and financial practices. In the past, law firms relied on traditional billing methods such as paper invoices and checks. However, with the advent of technology, law firms have shifted to digital billing methods such as electronic invoices and online payment systems. This has made the billing process more efficient and streamlined. In addition to digital billing methods, law firms have also adopted new financial practices such as trust accounting. Trust accounting is a method of accounting that is used to manage funds that are held in trust for clients. This is particularly important for law firms that handle client funds, such as personal injury or real estate law firms.

Over the last decades, alternative fee arrangements (AFAs) have also significantly impacted the legal industry. They did so by offering pricing models distinct from traditional hourly billing. AFAs, including fixed fees, contingency fees, and blended rates, have gained popularity as clients seek greater transparency and predictability in legal fees. A recent study identified 22 law firms excelling in integrating AFAs into their service delivery, earning praise from clients for improved pricing and value. The study underscores a client preference for firms providing enhanced pricing and value. This emphasizes how AFAs not only contribute to better relationships between law firms and clients but also demonstrate the firms’ commitment, fostering trust and credibility.

Legal Research and Analytics

Legal research and analytics have also been revolutionised over the last decades, making the process more efficient and accessible. We have seen primary and secondary legal research publications become available online. Facilitated by information and communication technologies, this has replaced traditional storage methods like compact discs or print media. This shift has not only increased accessibility but also allowed legal professionals to conduct more comprehensive and accurate research. Furthermore, the emergence of legal analytics has empowered professionals to enhance legal strategy, resource management, and matter forecasting by identifying patterns and trends in legal data.

Client Expectations

Another notable change is that clients’ expectations of lawyers have evolved significantly. A recent survey highlights that 79% of legal clients consider efficiency and productivity crucial, indicating a demand for more effective legal services. Additionally, clients now expect increased accessibility and responsiveness from their lawyers, prompting law firms to integrate new technologies such as client portals and online communication tools. Transparency in fees and billing practices is also a priority for clients, leading to the growing adoption of alternative fee arrangements by law firms. (Cf. above).

Globalization

Finally, globalization has significantly impacted the legal industry. It forced law firms to adapt to a changing global landscape and heightened demand for legal services across borders. Many European law firms, these days, are members of some international legal network, with branches in many EU countries. And a recent study highlights the emergence of a new corporate legal ecosystem in emerging economies like India, Brazil, and China. This presents opportunities for law firms to expand globally. In response to the globalization of business law and the increasing demand from transnational companies, law firms are transforming their practices. They do so by merging across borders and creating mega practices with professionals spanning multiple countries. This shift has prompted the adoption of new managerial practices and strategies to effectively manage global operations within these law firms.


Microsoft SharePoint Syntex

In this blog post, we will explore Microsoft SharePoint Syntex. We focus on the following questions: What is Microsoft SharePoint Syntex? What can it do? Is Microsoft SharePoint Syntex already available? What are the benefits of Microsoft SharePoint Syntex? And what are the benefits for lawyers?

In the last year, generative AI has been making headlines. (See, e.g., our articles on ChatGPT for lawyers, on Generative AI, and on the dangers of AI). Many software companies have started integrating generative AI into their products and services. Microsoft is no exception. Two of their new generative AI services stand out: CoPilot and SharePoint Syntex. This article is about SharePoint Syntex. Our next article will be about CoPilot.

What is Microsoft SharePoint Syntex?

So, what is Microsoft’s SharePoint Syntex? It is a new product that uses advanced AI and machine teaching to help you capture, manage, and reuse your content more effectively. As the name suggests, it is in essence an add-on feature for SharePoint. (Our blog also has an article on using SharePoint in law firms). Once it is installed, it can be used in some other programs as well. (See below).

Microsoft describes SharePoint Syntex as a content understanding, processing, and compliance service. It uses intelligent document processing, content artificial intelligence (AI), and advanced machine learning. This allows it to automatically and thoughtfully find, organize, and classify documents in your SharePoint libraries, Microsoft Teams, OneDrive for Business, and Exchange. With Syntex, you can automate your content-based processes—capturing the information in your law firm’s documents and transforming that information into working knowledge.

Syntex is the first product from Project Cortex. That is a Microsoft 365 initiative that aims to empower people with knowledge and expertise in the apps they use every day.

What can it do?

Microsoft Syntex offers several services and features to help you enhance the value of your content, build content-centric apps, and manage content at scale. Some of the main services and features are:

Content assembly: You can automatically generate standard repetitive business documents, such as contracts, statements of work, service agreements, letters of consent, and correspondence. You can do all these tasks quicker, more consistently, and with fewer errors in Syntex. You create modern templates based on the documents you use most. You then use those templates to automatically generate new documents using SharePoint lists or user entries as a data source.
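The template-plus-data-source pattern behind content assembly can be illustrated in a few lines of plain Python (the template text and field names below are invented for illustration; in Syntex the data source would be a SharePoint list or a user entry form):

```python
from string import Template

# A hypothetical "modern template" with placeholders for list-driven fields.
ENGAGEMENT_LETTER = Template(
    "Dear $client,\n\n"
    "This letter confirms that $firm will represent you in the matter of "
    "$matter, at an hourly rate of EUR $rate.\n"
)

# Each row stands in for one entry in the data source.
rows = [
    {"client": "Acme NV", "firm": "Example & Partners",
     "matter": "a lease dispute", "rate": "250"},
]

for row in rows:
    print(ENGAGEMENT_LETTER.substitute(row))
```

Generating every document from the same template is what delivers the consistency and error reduction described above: the variable parts come from the data source, the fixed parts cannot drift.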

Prebuilt document processing: You can use a prebuilt model to save time processing and extracting information from contracts, invoices, or receipts. Prebuilt models are pretrained to recognize common documents and the structured information in the documents. Instead of having to create a new document processing model from scratch, you can use a prebuilt model to jumpstart your document project.

Structured and freeform document processing: You can use a structured model to automatically identify field and table values. It works best for structured or semi-structured documents, such as forms and invoices. You can use a freeform model to automatically extract information from unstructured and freeform documents, such as letters and contracts where the information can appear anywhere in the document. Both structured and freeform models use Microsoft Power Apps AI Builder to create and train models within Syntex.
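The difference between structured and freeform processing can be sketched in miniature: fixed fields behind predictable labels versus information that “can appear anywhere”. A toy example with invented document text (real Syntex models are trained, not hand-written regexes):

```python
import re

invoice = "Invoice No: 2024-117\nAmount Due: EUR 1,250.00\nDue Date: 2024-05-31"
letter = ("Further to our meeting, we agree that payment of EUR 1,250.00 "
          "shall be made no later than 31 May 2024.")

def extract_structured(text: str) -> dict:
    """'Structured': field values sit behind predictable labels."""
    return dict(re.findall(r"(?m)^([\w ]+):\s*(.+)$", text))

def extract_amounts(text: str) -> list:
    """'Freeform': the same facts must be found wherever they occur."""
    return re.findall(r"EUR\s[\d,]+\.\d{2}", text)

print(extract_structured(invoice))
print(extract_amounts(letter))
```

The invoice yields a clean field dictionary because its layout is known in advance; the letter only gives up its amount because we search the whole text, which is exactly why freeform documents need the more general model.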

Content AI: You can understand and gather content with AI-powered summarization, translation, auto-assembly, and annotations incorporated into Microsoft 365 and Teams.

Content apps: You can extend and develop content apps with high-volume containers, data, and rich APIs.

Content management: You can analyse and protect content through its lifecycle with AI powered security and compliance, backup/restore and advanced content management.

Is Microsoft SharePoint Syntex already available?

SharePoint Syntex was released on 1 October 2023, and is available in all countries where Microsoft 365 is offered. So, if you are a CICERO LawPack user, you can start using it already. But note that there are some differences in the availability of languages and pricing for SharePoint Syntex in Europe.

SharePoint Syntex supports 21 languages for document understanding models and 63 languages for form processing models. (The article in the Microsoft Tech Community on the availability, which is listed in the sources below, has the full list of supported languages). All languages in which Microsoft 365 is available in Europe are available for Syntex within Europe. This does not mean, however, that all languages are available in all regions. For example, some languages are only available in the US region, such as Arabic, Hebrew, Hindi, Thai, and Vietnamese.

The pricing of SharePoint Syntex depends on the type of licensing and the number of transactions, as well as on the region and currency. There are two options for licensing: per-user and pay-as-you-go. Per-user licensing costs $5 per user per month in the US and allows unlimited usage of Syntex services. The price in EUR may differ depending on the exchange rate and local taxes. Pay-as-you-go licensing charges based on the total number of pages processed by Syntex, with different rates for unstructured, structured, and prebuilt document processing.

According to the Microsoft website, the price of SharePoint Syntex in Belgium is €7,90 per user per month for per-user licensing, and €0,04 per transaction for unstructured document processing, €0,01 per transaction for prebuilt document processing, and €0,04 per transaction for structured and freeform document processing for pay-as-you-go licensing. These prices do not include VAT and may vary depending on the currency exchange rate and the Azure subscription plan. You can find the exact price of SharePoint Syntex in your region and currency on the Microsoft 365 Enterprise Licensing page (listed below in the sources).
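Using the Belgian figures quoted above, a quick break-even sketch shows which licensing model is cheaper for a given workload (prices taken from this article; check Microsoft’s current price list before deciding):

```python
PER_USER_EUR = 7.90          # per user per month (Belgium, per article above)
PAY_AS_YOU_GO = {            # EUR per transaction
    "unstructured": 0.04,
    "prebuilt": 0.01,
    "structured": 0.04,
}

def monthly_cost_per_user(users: int) -> float:
    return users * PER_USER_EUR

def monthly_cost_payg(transactions: dict) -> float:
    return sum(PAY_AS_YOU_GO[kind] * n for kind, n in transactions.items())

# A 10-lawyer firm processing 1,000 unstructured pages a month:
print(monthly_cost_per_user(10))                   # 79.0
print(monthly_cost_payg({"unstructured": 1_000}))  # 40.0
```

In this invented scenario pay-as-you-go is cheaper; a firm with heavier or unlimited usage per user would tip the other way, which is the trade-off the two licensing options are designed around.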

What are the benefits of Microsoft SharePoint Syntex?

Microsoft Syntex can help your law firm automate business processes, improve search accuracy, and manage compliance risk. With content AI services and capabilities, you can build content understanding and classification directly into the content management flow. Some of the benefits of using Microsoft Syntex are:

Increased productivity: Your law firm can save time and resources by automating repetitive tasks such as document generation, extraction, classification, tagging, indexing, summarization, translation, etc. You can also access your content faster and easier by using intelligent search capabilities that leverage metadata and AI insights.

Improved quality: You can reduce errors and inconsistencies by using standardized templates, prebuilt models, or custom models that suit your specific needs. You can also ensure that your content is accurate, relevant, and up to date by using AI-powered analytics and feedback mechanisms.

Enhanced security: You can protect your sensitive data by using AI-powered security and compliance features that help you identify risks, apply policies, enforce retention rules, monitor activity, audit changes, etc. You can also backup and restore your content in case of accidental deletion or corruption.

What are the benefits for lawyers?

For lawyers in particular, Microsoft Syntex can offer some additional benefits that can help them streamline their legal workflows, improve their client service, and reduce their liability exposure.

Faster contract review: Lawyers can use prebuilt or custom models to automatically extract key information from contracts such as parties, clauses, terms, dates, amounts, etc. They can also use content assembly to automatically generate contracts based on templates and data sources. This can help them speed up their contract review process, avoid missing important details or deadlines, and ensure consistency across their contracts.

Easier knowledge management: Lawyers can use content AI to automatically summarize, translate, annotate, tag, and index their legal documents, such as cases, opinions, briefs, and memos. They can also use intelligent search to quickly find relevant information across their SharePoint libraries or Teams channels. This can help them manage their legal knowledge more effectively, access the information they need when they need it, and share it with their colleagues or clients.

Better compliance and risk management: Lawyers can use content management to automatically apply security and compliance policies to their legal documents based on their sensitivity, confidentiality, or retention requirements. They can also use AI-powered analytics and monitoring to identify potential issues, conflicts, or breaches in their documents and take appropriate action. This can help them comply with their ethical and legal obligations, protect their clients’ interests, and reduce their liability exposure.

 

Sources:

 

Ambient Computing for lawyers

In our previous article, we discussed ambient computing: what it is, and what the benefits and challenges are. In this article we discuss what the relevance of ambient computing is for lawyers. We look at ambient law, which deals with the legal aspects of ambient computing. Then we ask ourselves, “what are the benefits of ambient computing for lawyers?”, and “what are the challenges?”. But first we start with a short recap on what ambient computing is.

A short recap: what is ambient computing?

In our previous article, we explained that “ambient computing is the idea of embedding computing power into everyday objects and environments, to make them smart, connected, and responsive. The goal is to make it easier for users to take full advantage of technology without having to worry about the details. (…) Ambient computing relies on a variety of technologies, such as sensors, artificial intelligence, cloud computing, voice recognition, gesture control, and wearable devices, to create a seamless and personalized user experience. Ambient computing devices are designed to be unobtrusive and blend into the background, so that users can focus on their tasks and goals rather than on the technology itself.” As such, the concept of ambient computing is closely related to the concept of the Internet of Things.

Examples of ambient computing technology are found in smart homes, cars, business premises, as well as other domains, such as health care, education, entertainment, and transportation, etc.

So, now that we know what ambient computing is, we can focus on the next question: what does ambient computing mean for lawyers and the legal profession? Three items come to mind: what are the legal aspects of ambient computing? What are the benefits for lawyers? What are the risks and challenges for lawyers?

Ambient Law: the legal aspects of ambient computing

When the Internet of Things was starting to take form, the term ambient law was introduced to refer to the legal aspects of using ambient technology. There are four main areas where legal issues can arise, and we can pair them in two sets of two: on the one hand, data privacy and security; on the other hand, liability and accountability.

Data Privacy and Security

Ambient computing involves collecting, processing, and sharing large amounts of personal and sensitive data from various sources and devices, which raises significant privacy and security concerns.

Privacy: In our previous article we wrote that ambient computing collects vast amounts of data about users’ behaviour, preferences, location, health, and more. This data can be used for beneficial purposes, such as improving services and personalization. But it can also be misused or compromised by malicious actors, or sold to third parties, often without the users’ knowledge or consent. Many car manufacturers, e.g., have been guilty of this.

In this context, it is worth referring to the SWAMI project, which stands for Safeguards in a World of Ambient Intelligence. This project took a precautionary approach in its exploration of the privacy risks in Ambient Intelligence (AmI) and sought ways to reduce those risks.

The project discovered that several “dark scenarios” were possible that would have negative implications for privacy protection. It identified various threats and vulnerabilities. Legal analysis of these scenarios also showed that there are shortcomings in the current legal framework and that current legislation cannot provide adequate privacy protection in the AmI environment.

The project concluded that a new approach to privacy and data protection is needed, one based on control and responsibility rather than on restriction and prohibition.

Security: Again, there are several aspects to the security side of ambient computing. On the one hand, all the personal data it collects must be protected. On the other hand, each new ambient device in essence increases the attack surface. Ambient technologies can expose users’ devices and data to cyberattacks or physical tampering, which can compromise users’ safety and the functionality of their devices. Cars and baby monitors, e.g., appear to be easy targets for hackers.

There have been initiatives already to tackle the possible security risks inherent in ambient computing. Relevant data security laws generally focus on data protection, cybersecurity, cross-border data transfers, the rights of the data subject, and on penalties for non-compliance.

Liability and accountability

The other two aspects are legal liability and accountability: Ambient computing involves delegating some decisions and actions to autonomous agents or systems that may not be fully transparent or predictable. This raises questions about who is responsible and liable for the outcomes or consequences of those decisions or actions, especially when they cause harm or damage to others. (In a previous article, we examined robot law and asked who would be responsible for a robot’s actions: the robot, the owner, or the manufacturer?)

As we are dealing with new technologies that are literally all around us, legal liability and accountability in ambient computing are complex issues.

What are the benefits of ambient computing for lawyers?

In our previous article, we highlighted some general benefits of ambient computing. These include convenience, efficiency, engagement, and empowerment. More specifically for lawyers, ambient computing can offer three groups of benefits.

A first set of benefits has to do with improving productivity and efficiency. Ambient computing technology can automate and streamline many routine tasks and processes that lawyers perform. Some law firm management software can already be voice-controlled and works together with virtual assistants. (Our article on virtual legal assistants discusses this, too).

Ambient computing can also enhance client experience and satisfaction. It can enable lawyers to provide more personalized, responsive, and proactive service to their clients, by leveraging data and insights from various sources and devices.

Finally, ambient computing can open up new business models and opportunities. It can create new types of services, products, and platforms that leverage ambient intelligence and connectivity.

What are the challenges?

Ambient computing also poses some challenges and risks for lawyers, including the ones we already mentioned above when talking about ambient law.

When it comes to protecting data privacy and security, lawyers have a duty to protect the confidentiality and integrity of their clients’ data, as well as their own data. Therefore, they need to ensure that they comply with the applicable laws and regulations on data protection, such as the GDPR. They also must make sure their ambient technology complies with the ethical standards and best practices of their profession. Furthermore, they need to be aware of the potential threats and vulnerabilities that ambient computing introduces, such as data breaches, cyberattacks, unauthorized access, etc., and take appropriate measures to prevent or mitigate them.

For lawyers, too, there are aspects of legal liability and accountability. Ambient computing involves delegating some decisions and actions to autonomous agents or systems that may not be fully transparent or predictable. This raises questions about who is responsible and liable for the outcomes or consequences of those decisions or actions, especially when they cause harm or damage to others. Lawyers need to understand the legal implications and risks of using ambient computing in their practice or advising their clients on it. They also need to ensure that they have adequate contracts, policies, insurance, etc., to cover any potential liability or claims that may arise from ambient computing.

Finally, ambient computing may force lawyers to adapt to changing roles and skills. Ambient computing may disrupt or transform some aspects of the legal profession or industry, by creating new demands or expectations from clients or stakeholders. Lawyers need to be prepared to adapt to these changes and embrace new roles and skills that ambient computing requires or enables. For example, they may need to become more tech-savvy or data-driven, collaborate more with other professionals or disciplines, or specialize in new areas or domains related to ambient computing.

Conclusion

Ambient computing is an emerging trend that has significant implications for lawyers and the legal profession. Ambient computing can offer many benefits for lawyers who want to improve their practice and service delivery. However, it also poses some challenges and risks that lawyers need to address carefully. Lawyers who want to embrace ambient computing need to be aware of the legal and regulatory aspects of ambient computing in their jurisdiction or context. They also need to be proactive in learning and adopting the best practices and tools that ambient computing provides or demands.

Sources:

Virtual Legal Assistants

In this article, we discuss virtual legal assistants (VLAs). We answer questions like, “What are virtual legal assistants?”, “What services do they offer?”, “What are the benefits of using virtual legal assistants?”, and “What are the limitations?”. We also have a look at some statistics.

What are virtual legal assistants?

When we read articles on virtual legal assistants, we discover that the term is used in different ways. Some definitions restrict it to physical persons who work remotely and to whom purely administrative tasks are outsourced. Most authors also include the work of (remote) paralegals, while others also include all the services offered by third-party (or alternative) legal service providers who may use AI-powered or technology-driven platforms like bots. So, in its widest sense, a virtual legal assistant is an assistant that remotely assists lawyers, law firms, and legal professionals with various tasks and processes. They typically work as subcontractors for the law firm.

What services do they offer?

Virtual legal assistants offer a wide range of services. They can streamline and facilitate client communications and interactions. E.g., they can answer frequently asked questions and provide updates on case status, while maintaining confidentiality and security. They can personalize client engagement. And if you work with VLA bots and/or with physical people in different locations, they can guarantee 24/7 availability and instant responses.

Virtual legal assistants can enhance legal research. They can help lawyers find relevant case law, statutes, regulations, and legal articles to support their arguments and build stronger cases. They provide instant access to legal knowledge.

VLAs can assist in drafting legal documents such as contracts, agreements, pleadings, and other legal correspondence, often by generating templates or suggesting content based on context. One area where VLA bots have proven very useful is contract review. They can review contracts, highlight important clauses, identify potential risks, and ensure compliance with relevant laws.
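To give a feel for what clause flagging involves, here is a deliberately simplified, rule-based sketch. Commercial VLA bots use far more sophisticated NLP models; the risk patterns and the sample contract text below are purely illustrative assumptions, not any real product’s logic.

```python
# Toy illustration of rule-based contract review: flag clauses that
# match simple risk patterns. The keywords are illustrative only.
import re

RISK_PATTERNS = {
    "unlimited liability": r"\bunlimited liability\b",
    "auto-renewal": r"\bautomatically renew(s|ed)?\b",
    "unilateral termination": r"\bterminate .* at any time\b",
}

def flag_clauses(contract_text):
    """Return a list of (clause, risk label) pairs for matching clauses."""
    flagged = []
    for clause in contract_text.split("."):
        for label, pattern in RISK_PATTERNS.items():
            if re.search(pattern, clause, flags=re.IGNORECASE):
                flagged.append((clause.strip(), label))
    return flagged

sample = ("The supplier accepts unlimited liability for defects. "
          "This agreement shall automatically renew each year.")
for clause, label in flag_clauses(sample):
    print(f"[{label}] {clause}")
```

A real contract-review tool would of course parse clauses properly and weigh context rather than match keywords, but the basic workflow, segment the document and score each clause against known risk categories, is the same.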

Virtual legal assistants also contribute to facilitating and optimizing case management and workflow. They can organize and manage case-related information, deadlines, and tasks, streamlining the workflow for lawyers and legal teams. VLA bots can provide automated case updates.

Other areas where VLAs are useful include bookkeeping, billing, and time tracking. They can help lawyers track billable hours and manage invoicing for clients.
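As a minimal sketch of the kind of time-tracking arithmetic such a tool automates (the hourly rate, client names, and time entries below are hypothetical, not any real product’s data or API):

```python
# Minimal sketch of billable-hours tracking: sum time entries per client
# and compute the amount due. Rate and entries are assumed for illustration.
from collections import defaultdict

HOURLY_RATE = 250.0  # assumed flat hourly rate

def invoice_totals(entries):
    """entries: list of (client, hours) tuples -> {client: amount_due}."""
    hours_per_client = defaultdict(float)
    for client, hours in entries:
        hours_per_client[client] += hours
    return {c: round(h * HOURLY_RATE, 2) for c, h in hours_per_client.items()}

entries = [("Acme NV", 1.5), ("Acme NV", 0.75), ("Bruno BV", 2.0)]
print(invoice_totals(entries))  # {'Acme NV': 562.5, 'Bruno BV': 500.0}
```

The value of delegating this is not the arithmetic itself but the consistent capture of entries; a VLA or bot that logs time as work happens removes the end-of-month reconstruction that costs lawyers billable hours.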

You can also hire a VLA for data entry.

Due diligence is another area. Virtual legal assistants can assist in conducting due diligence for mergers, acquisitions, or other transactions by analysing legal and financial data.

VLA bots are also useful for legal analytics. They can analyse large sets of legal data and provide insights into trends, patterns, and potential outcomes.

Finally, there is E-discovery. VLA bots can help with the process of identifying, collecting, and analysing electronically stored information (ESI) for litigation purposes.

Some statistics

There are plenty of interesting statistics available when it comes to virtual assistants. Here is a selection.

  • Virtual assistants can decrease operating costs by up to 78%.
  • Investing in virtual assistants cuts attrition by 50%. (The attrition rate refers to the number of people resigning from an organization over a period of time.)
  • Virtual assistants increase productivity by 13%.
  • According to a survey by the American Bar Association, 26% of lawyers use virtual assistants or paralegals. The 2020 Legal Trends Report found that law firms only spend an average of 2.5 hours each day on billable work, which can be improved by delegating work to legal virtual assistants.
  • A study by the University of Oxford found that 23% of legal work can be automated by existing technology, and that virtual assistants can handle tasks such as document review, contract analysis, due diligence, and research.
  • A report by Deloitte estimated that 39% of legal jobs will be replaced by automation, and that virtual assistants will play a key role in enhancing productivity, efficiency, and accuracy.
  • A survey by LawGeex found that virtual assistants can review contracts faster and more accurately than human lawyers. The average accuracy rate for virtual assistants was 94%, compared to 85% for human lawyers. The average time for virtual assistants to review a contract was 26 seconds, compared to 92 minutes for human lawyers.
  • According to Gartner, by 2023, virtual legal assistants (VLAs) will field 25% of internal requests to legal departments at large enterprises, increasing operational capacity for in-house corporate teams.
  • A survey by Virtudesk found that 82% of business owners who hired virtual assistants reported increased productivity and efficiency, and 78% said they saved money on operational costs.
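To make the first figure above concrete, here is a back-of-the-envelope calculation. The salary and cost figures are assumptions chosen for illustration only; actual numbers vary widely by market and role.

```python
# Back-of-the-envelope illustration of the "up to 78%" cost figure:
# compare an assumed in-house annual cost with a virtual-assistant cost.
in_house_cost = 60_000.0   # assumed annual salary + benefits + overhead
virtual_cost = 13_200.0    # assumed annual cost of a virtual assistant

savings_pct = (in_house_cost - virtual_cost) / in_house_cost * 100
print(f"Savings: {savings_pct:.0f}%")  # Savings: 78%
```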

What are the benefits of using virtual legal assistants?

We listed the tasks virtual legal assistants can do above. By delegating these tasks to a virtual legal assistant, you can free up your time and focus on the core aspects of your practice, such as strategy, advocacy, and client relations. As such, they increase efficiency and productivity.

You can also reduce your overhead costs, as you only pay for the services you need, when you need them. You don’t have to worry about hiring, training, supervising, or providing benefits to an in-house staff member. In other words, they can also be a more cost-effective solution compared to hiring additional staff for administrative tasks. (Cf. the statistics quoted above).

A virtual legal assistant can also offer you flexibility and convenience, as they can work from anywhere and often at any time. You can access their services on demand, without being limited by office hours or location. You can also communicate with them through various channels, such as phone, email, chat, or video conferencing. VLA bots work 24/7.

There is also the access-to-technology aspect. AI virtual legal assistants can automate repetitive tasks. They can leverage advanced AI and technology and may provide access to powerful tools that might not otherwise be affordable or available to smaller law firms.

Virtual legal assistants increase accuracy. AI-driven assistants in particular can often perform tasks with a high level of accuracy and consistency, reducing the likelihood of human error. (Cf. our article on when lawyers and robots compete).

Scalability is another benefit. Working with VLAs allows you to easily adapt to the changing needs of your law practice, whether it’s handling increased workloads during busy periods or scaling down during quieter times.

What are the limitations?

While virtual legal assistants can be valuable tools, they are not meant to replace human lawyers but to complement them by enhancing their productivity and efficiency. It is essential to consider the specific needs of the law practice and the capabilities of the virtual legal assistant platform before making a choice.

Another thing to keep in mind is that at present most of the VLA bots are only available in English.

Conclusion

The use of virtual legal assistants is on the rise, and that should not come as a surprise. They boost efficiency and productivity, are cost-effective, and allow lawyers to focus on legal work.

 

Sources:

Statistics

The dangers of artificial intelligence

Artificial intelligence (AI) is a powerful technology that can bring many benefits to society. However, AI also poses significant risks and challenges that need to be addressed with caution and responsibility. In this article, we explore the questions, “What are the dangers of artificial intelligence?”, and “Does regulation offer a solution?”

The possible dangers of artificial intelligence have been making headlines lately. First, Elon Musk and several experts called for a pause in the development of AI. They were concerned that we could lose control over AI considering how much progress has been made recently. They expressed their worries that AI could pose a genuine risk to society. A second group of experts, however, replied that Musk and his companions were severely overstating the risks involved and labelled them “needlessly alarmist”. But then a third group again warned of the dangers of artificial intelligence. This third group included people like Geoffrey Hinton, who has been called the godfather of AI. They even explicitly stated that AI could lead to the extinction of humankind.

Since those three groups stated their views, many articles have been written about the dangers of AI. And the calls to regulate AI have become louder than ever before. (We published an article on initiatives to regulate AI in October 2022). Several countries have started taking initiatives.

What are the dangers of artificial intelligence?

So, what are the dangers of artificial intelligence? As with any powerful technology, it can be used for nefarious purposes. It can be weaponized and used for criminal purposes. But even the proper use of AI holds inherent risks and can lead to unwanted consequences. Let us have a closer look.

A lot of attention has already been paid in the media to the errors, misinformation, and hallucinations of artificial intelligence. Tools like ChatGPT are programmed to sound convincing, not to be accurate. ChatGPT gets its information from the Internet, but the Internet contains a lot of information that is not correct, and its answers reflect this. Worse, because it is programmed to provide an answer whenever it can, it sometimes simply makes things up. Such instances have been called hallucinations. In a lawsuit in the US, e.g., a lawyer had to admit that the precedents he had quoted did not exist and were fabricated by ChatGPT. (In a previous article on ChatGPT, we warned that any legal feedback it gives must be double-checked).

As soon as ChatGPT became available, cybercriminals started using it to their advantage. A second set of dangers therefore has to do with cybercrime and cybersecurity threats: AI can be exploited by malicious actors to launch sophisticated cyberattacks. This includes using AI algorithms to automate and enhance hacking techniques, identify vulnerabilities, and breach security systems. Phishing attacks have also become more sophisticated and harder to detect.

AI can also be used for cyber espionage and surveillance: AI can be employed for sophisticated cyber espionage activities, including intelligence gathering, surveillance, and intrusion into critical systems. Related to this is the risk of invasion of privacy and data manipulation. AI can collect and analyse massive amounts of personal data from various sources, such as social media, cameras, sensors, and biometrics. This can enable AI to infer sensitive information about people’s identities, preferences, behaviours, and emotions. AI can also use this data to track and monitor people’s movements, activities, and interactions. This can pose threats to human rights, such as freedom of expression, association, and assembly.

Increased usage of AI will also lead to the loss of jobs due to automation. AI can perform many tasks faster and cheaper than humans, which will lead to unemployment and inequality. An article on ZDNet estimates that AI could automate 300 million jobs; approximately 28% of current jobs could be at risk.

There also is a risk of loss of control. As AI systems become more powerful, there is a risk that we will lose control over them. This could lead to AI systems making decisions that are harmful to humans, such as launching a nuclear attack or starting a war. This loss of control is also a major concern with regard to the weaponization of AI. As AI technology advances, there is a worry that it could be weaponized by state or non-state actors. Autonomous weapon systems equipped with AI could potentially make lethal decisions without human intervention, leading to significant ethical and humanitarian concerns.

We already mentioned errors, misinformation, and hallucinations. Those are involuntary side effects of AI. A related danger of AI is the deliberate manipulation and misinformation of society through algorithms. AI can generate realistic and persuasive content, such as deepfakes, fake news, and propaganda, that can influence people’s opinions and behaviours. AI can also exploit people’s psychological biases and preferences to manipulate their choices and actions, such as online shopping, voting, and dating.

Generative AI tends to use existing data as its basis for creating new content. But this can cause issues of infringement of intellectual property rights. (We briefly discussed this in our article on generative AI).

Another risk inherent to the fact that AI learns from large datasets is bias and discrimination. If this data contains biases, then AI can amplify and perpetuate them. This poses a significant danger in areas such as hiring practices, lending decisions, and law enforcement, where biased AI systems can lead to unfair outcomes. And if AI technologies are not accessible or affordable for all, they could exacerbate existing social and economic inequalities.

Related to this are ethical implications. As AI systems become more sophisticated, they may face ethical dilemmas, such as decisions involving human life or the prioritization of certain values. Think, e.g., of self-driving vehicles when an accident cannot be avoided: do you sacrifice the driver if it means saving more lives? It is crucial to establish ethical frameworks and guidelines for the development and deployment of AI technologies. Encouraging interdisciplinary collaboration among experts in technology, ethics, and philosophy can help navigate these complex ethical challenges.

At present, there is insufficient regulation regarding the accountability and transparency of AI. As AI becomes increasingly autonomous, accountability and transparency become essential to address the potential unintended consequences of AI. In a previous article on robot law, we asked the question who is accountable when, e.g., a robot causes an accident. Is it the manufacturer, the owner, or – as AI becomes more and more self-aware – could it be the robot? Similarly, when ChatGPT provides false information, who is liable? In the US, Georgia radio host Mark Walters found that ChatGPT was spreading false information about him, accusing him of embezzling money. So, he is suing OpenAI, the creators of ChatGPT.

As the abovementioned example of the lawyer quoting non-existent precedents illustrated, there is also a risk of dependence and overreliance: relying too heavily on AI systems without proper understanding or human oversight can lead to errors, system failures, or the loss of critical skills and knowledge.

Finally, there is the matter of superintelligence that several experts warn about. They claim that the development of highly autonomous AI systems with superintelligence surpassing human capabilities poses a potential existential risk. The ability of such systems to rapidly self-improve and make decisions beyond human comprehension raises concerns about control and ethical implications. Managing this risk requires ongoing interdisciplinary research, collaboration, and open dialogue among experts, policymakers, and society at large. On the other hand, one expert said that it is baseless to automatically assume that superintelligent AI will become destructive, just because it could. Still, the EU initiative includes the requirement to build in a compulsory kill switch that allows the AI to be switched off at any given moment.

Does regulation offer a solution?

In recent weeks, several countries have announced initiatives to regulate AI. The EU already had its own initiative. At the end of May, its tech chief Margrethe Vestager said she believed a draft voluntary code of conduct for generative AI could be drawn up “within the next weeks”, with a final proposal for industry to sign up “very, very soon”. The US, Australia, and Singapore also have submitted proposals to regulate AI.

Several of the abovementioned dangers can be addressed through regulation. Let us go over some examples.

Regulations for cybercrime and cybersecurity should emphasize strong cybersecurity measures, encryption standards, and continuous monitoring for AI-driven threats.

To counter cyber espionage and surveillance risks, we need robust cybersecurity practices, advanced threat detection tech, and global cooperation to share intelligence and establish norms against cyber espionage.

Privacy and data protection regulations should enforce strict standards, incentivize secure protocols, and impose severe penalties for breaches, safeguarding individuals and businesses from AI-enabled cybercrime.

To prevent the loss of jobs, societies need to invest in education and training for workers to adapt to the changing labour market and create new opportunities for human-AI collaboration.

Addressing AI weaponization requires international cooperation, open discussions, and establishing norms, treaties, or agreements to prevent uncontrolled development and use of AI in military applications.

To combat deepfakes and propaganda, we must develop ethical standards and regulations for AI content creation and dissemination. Additionally, educating people on critical evaluation and information verification is essential.

Addressing bias and discrimination involves ensuring diverse and representative training data, rigorous bias testing, and transparent processes for auditing and correcting AI systems. Ethical guidelines and regulations should promote fairness, accountability, and inclusivity.

When it comes to accountability and transparency, regulatory frameworks can demand that developers and organizations provide clear explanations of how AI systems make decisions. This enables better understanding, identification of potential biases or errors, and the ability to rectify any unintended consequences.

At the same time, regulation also has its limitations. While it is important, e.g., to regulate things like cybercrime or the weaponization of AI, it is also clear that regulation will not put an end to these practices. After all, by definition, cybercriminals don’t tend to care about regulations. And although several types of weapons of mass destruction have been outlawed, it is clear that they are still being produced and used by several actors. But regulation does help to hold transgressors accountable.

It is also difficult to assess how disruptive the impact of AI will be on society. Depending on how disruptive it is, additional measures may be needed.

Conclusion

We have reached a stage where AI has become so advanced that it will change the world and the way we live. This is already creating issues that need to be addressed. And as with any powerful technology, it can be abused. Those risks, too, need to be addressed. But while we must acknowledge these issues, it should also be clear that the benefits outweigh the risks, as long as we don’t get ahead of ourselves. At present, humans abusing AI are a greater danger than AI itself.

 

Sources: