Client Portals for Law Firms

For a while now, the American Bar Association has been recommending that lawyers use client portals to exchange information with their clients. One of the main arguments is that client portals provide a more secure way to communicate than email. So, what are client portals, and what benefits do they offer?

In a previous article, we described client portals as a place on the Internet where your clients can view, and possibly edit, their own data, usually with a browser. A portal allows you to interact with your clients: to share files, hold discussions, chat, plan, and organize and manage tasks and events in a private, secure, online environment. The data are often stored in the cloud (or are accessible via cloud technologies) and are encrypted, and the communications between the portal and the client are encrypted as well.

Client portals have several key features. Sharing information is one of the most important. Once clients have access to the portal, they can consult the status of their cases: they can view documents, and get overviews of billing and accounting data, of the tasks that have already been completed, and of the tasks still on the agenda, waiting to be executed. As such, client portals offer greater transparency, as well as an effective way to collaborate.

A second key feature of client portals, mentioned above, is secure communication. Because the data on the portal, as well as the exchange of data between the client and the portal, are encrypted, the communications are more secure than email exchanges. Google publishes a real-time transparency report that keeps track of the amount of email that is not encrypted and can therefore be intercepted and read. It shows that, at present, on average one in four emails is not encrypted. For a lawyer, this is important, because sending email over non-secure channels could lead to liability for violation of confidentiality if the mail is intercepted.

A third key feature of client portals is their tight integration with practice management software. Client portals typically are available as add-ons to existing practice management packages. The practice management software will typically provide an administration backend that, among other things, incorporates permission management. In it, you can specify who has access to what information, and what they can do with that information: e.g., whether they can only read information, or whether they can also comment on it or modify it.
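As a rough illustration, the read / comment / modify distinction described above can be modelled as ordered access levels. This is only a sketch; the user names, resource names, and permission table below are invented for the example and not taken from any particular practice management package:

```python
from enum import IntEnum

class AccessLevel(IntEnum):
    """Ordered access levels: a higher level implies all lower ones."""
    NONE = 0
    READ = 1
    COMMENT = 2
    MODIFY = 3

# Hypothetical permission table: (user, resource) -> granted access level.
permissions = {
    ("client_jane", "case_2041_documents"): AccessLevel.READ,
    ("client_jane", "case_2041_discussion"): AccessLevel.COMMENT,
    ("paralegal_bob", "case_2041_documents"): AccessLevel.MODIFY,
}

def can(user: str, resource: str, required: AccessLevel) -> bool:
    """Return True if the user's granted level meets the required level."""
    granted = permissions.get((user, resource), AccessLevel.NONE)
    return granted >= required

print(can("client_jane", "case_2041_documents", AccessLevel.READ))    # True
print(can("client_jane", "case_2041_documents", AccessLevel.MODIFY))  # False
```

Because the levels are ordered, a single comparison answers the question "can this person do at least this much?", which keeps audit rules (point 5 in the list below) simple.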

There are different types of client portals. Most common is the regular client portal, used for messaging and document sharing. Some law firms use client portals with more advanced document management functionality, where clients can, e.g., generate legal documents by filling out forms. These forms supply the data that are then merged into templates. There are law firms, too, that use project management client portals. A growing number of client portals also allow clients to make online payments.
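The form-to-template merge described above can be sketched in a few lines of Python. The template text and field names here are invented for illustration; a real portal would store templates per document type and validate the form input:

```python
from string import Template

# Hypothetical document template with placeholders for form fields.
template = Template(
    "This agreement is made between $landlord (landlord) and "
    "$tenant (tenant) for the property at $address."
)

# Data as it might arrive from a client-facing web form.
form_data = {
    "landlord": "Acme Properties LLC",
    "tenant": "Jane Doe",
    "address": "12 High Street",
}

# Merge the form data into the template to produce the document text.
document = template.substitute(form_data)
print(document)
```

`substitute` raises an error if a placeholder is missing from the form data, which is usually what you want for legal documents: an incomplete form should fail loudly rather than produce a document with gaps.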

So, when do you need a client portal? You can use one if you want to

  1. share confidential information,
  2. enhance your communications with your clients,
  3. accept online payments (though not all portals provide this functionality yet),
  4. improve collaboration between lawyers, clients, and possibly other third parties,
  5. audit access to the information (i.e., keep track of who accessed what and when), or
  6. leverage “anytime anywhere access” to your law firm’s information.

So, it’s clear that client portals offer multiple benefits. Apart from the ones already mentioned, the increased transparency that client portals offer also leads to greater client satisfaction and reduces the need for ‘keeping up to date’ communications. The collaboration aspects of client portals increase productivity. Having a client portal can also offer a competitive advantage, in that it will appeal to more Internet-savvy clients.

So, what are you waiting for?


Robot Law

A few months ago, in January 2018, the European Parliament’s Legal Affairs Committee approved a report that outlines a possible legal framework to regulate the interactions between a) humans, and b) robots and Artificial Intelligence systems. The report is quite revolutionary. It proposes, e.g., giving certain types of robots and AI systems personhood, as “electronic persons”: These electronic persons would have rights and obligations, and the report suggests that they should obey Isaac Asimov’s Laws of Robotics. The report also advises that the manufacturers of robots and AI systems should build in a ‘kill switch’ to be able to deactivate them. Another recommendation is that a European Agency for Robotics and AI be established that would be capable of responding to new opportunities and challenges arising from technological advancements in robotics.

The EU is not alone in its desire to regulate AI: similar (though less far-reaching) reports were published in Japan and in the UK. These different initiatives are in effect the first attempts at creating Robot Law.

So, what is Robot Law? On the blog of the Michalsons Law Firm, Emma Smith describes Robot Law as covering “a whole variety of issues regarding robots including robotics, AI, driverless cars and drones. It also impacts on many human rights including dignity and privacy.” It deals with the rights and obligations of AI systems, manufacturers, consumers, and the public at large in their relationship to AI and to how it is being developed and used. As such, it is different from, and far broader than, Asimov’s Laws of Robotics, which only cover the rules that robots themselves have to obey.

Why would we need Robot Law? For a number of reasons. AI has become an important contributing factor to the transformation of society, and that transformation is happening extremely fast. The AI Revolution is often compared to the Industrial Revolution, but that comparison is partly flawed, because of the speed, scale, and pervasiveness of the AI Revolution. Some reports claim that the AI Revolution is happening up to 300 times faster than the Industrial Revolution. This partly has to do with the fact that AI is already being used everywhere, and that pervasiveness is only expected to increase rapidly. Think, e.g., of the Internet of Things, where everything is connected to the Internet, and massive amounts of data are being mined.

The usage of AI already raises legal issues of control, privacy, and liability. Further down the line, we will be confronted with issues of personhood and Laws of Robotics. But AI also has wide-reaching societal effects. Think, e.g., of the job market and the skill sets that are in demand: these will change dramatically. In the US alone, e.g., driverless cars and trucks will see a minimum of 3 million drivers lose their jobs. So, yes, there is a need for Robot Law.

Separate from the question of whether we need Robot Law is the question of whether we already need legislation now, and/or how much should be regulated at this stage. When trying to answer that question, we are met with almost diametrically opposing views.

The nay-sayers claim that it is still too soon to start thinking about Robot Law. The argument is that AI and robotics are still in their infancy, and that at this stage there is a need to explore and develop them further. Not only are there still too many unanswered questions, but in their view regulation at this stage could stifle the progress of AI. All we would have to do is adapt existing laws. In that context, Roger Bickerstaff, e.g., speaks of:

  • Facilitative changes – these are changes to law that are needed to enable the use of AI.
  • Controlling changes – these are changes to law and new laws that may be needed to manage the introduction and scope of operation of robotics and artificial intelligence.
  • Speculative changes – these are the changes to the legal environment that may be needed as robotics and AI start to approach the same level of capacity and capability as human intelligence – what is often referred to as the singularity point.

Others, like the authors of the aforementioned reports, disagree. They argue that there already are issues of privacy, control, and liability. There is also the problem of transparency: how do neural networks come to their conclusions, e.g., when they recommend whether somebody is eligible for parole or a loan, or when they assess risks, e.g., for insurance? How does one effectively appeal against such decisions if it is not known how the AI system reaches its conclusions? Furthermore, the speed, scale, and pervasiveness of the AI Revolution and its societal effects demand a proactive approach. If we don’t act now, we will soon be faced with problems that we know will arise.

Finally, in his paper, Ryan Calo points out, perhaps surprisingly, that there already is over half a century of case law with regard to robots. These cases deal with robots both as objects and as subjects. He rightfully points out that “robots tend to blur the lines between person and instrument”. A second, and more alarming, insight from his study is “that judges may have a problematically narrow conception of what a robot is”. For that reason alone, it would already be worthwhile to start thinking about Robot Law.
