An Introduction to Hashtags

What do you call this sign: #? If you’re a digital native (somebody who grew up when the Internet was already around), you’ll probably know it as the hashtag sign. If you’re older, you’ll probably refer to it as the number sign (sometimes also called the pound sign), unless you’re into programming or music. In that case, you may read it as ‘sharp’, as in C#. (On a side note, music teachers regularly express their dismay that young pupils refer to the note C# as ‘C hashtag’, but that’s a different story.)

So, what are these hashtags? What are they used for? And why should you care about them? We’ll find out in this article. In a follow-up article, we’ll show you how to use them to your advantage.

Wikipedia defines a hashtag as “a type of metadata tag used on social networks such as Twitter and other microblogging services, allowing users to apply dynamic, user-generated tagging which makes it possible for others to easily find messages with a specific theme or content. Users create and use hashtags by placing the number sign or pound sign # usually in front of a word or unspaced phrase in a message. The hashtag may contain letters, digits, and underscores. Searching for that hashtag will yield each message that has been tagged with it. A hashtag archive is consequently collected into a single stream under the same hashtag.”

Hashtags were first used on Twitter in 2007, at the suggestion of Chris Messina. Adding the #-sign in front of a word (or group of words) turns it into a clickable, searchable keyword. You can search for any topic you like, e.g. #ArtificialIntelligence or #Divorce, and you’ll get a list of relevant recent posts on that topic. Hashtags are often used for current events, like the recent #NotreDameFire or #HongKongProtest. If you write a post on a specific topic, you can simply add the relevant hashtag and people can easily find it.
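
To make this a bit more concrete, here is a small, purely illustrative Python sketch (the sample posts, the index dictionary and the simple pattern are our own assumptions, not a description of how any platform actually works). It extracts hashtags from a few posts and groups the posts per hashtag, so that looking up a hashtag returns every post tagged with it:

```python
import re
from collections import defaultdict

# Conceptual sketch only: real platforms index posts at a very different scale.
HASHTAG_RE = re.compile(r"#\w+")  # '#' followed by letters, digits or underscores

posts = [
    "Fascinating panel on #ArtificialIntelligence and the law today.",
    "New note on recent case law published. #Divorce",
    "Following the news about the #NotreDameFire.",
]

# Group posts per hashtag (lowercased), so that "searching" a hashtag
# returns every post that has been tagged with it.
index = defaultdict(list)
for post in posts:
    for tag in HASHTAG_RE.findall(post):
        index[tag.lower()].append(post)

print(index["#artificialintelligence"])  # -> the first post
print(index["#divorce"])                 # -> the second post
```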

Because hashtags turned out to be so useful and easy to use, they quickly spread to other social media as well. These days, hashtags are used on all major social media platforms: Twitter, LinkedIn, Facebook, Instagram, YouTube, Pinterest, Tumblr, etc. Apart from that, they’re now also used for SEO (Search Engine Optimization) purposes. When you publish an article on LinkedIn, for example, it suggests and asks for tags. And if online conversations about a current event are big enough, you can even search for its hashtag on Google and get a live scrolling feed of recent posts. (Some platforms give you live information on which topics are ‘trending’, i.e. which topics are being talked about most on that platform.)

When and why would you, as a lawyer, use hashtags? There are two sides to this. The first is searching for hashtags that others are using, in order to find information. Were you aware that hashtags can be used for legal research, to find relevant articles on specific topics? You can even do so on a regular basis to stay informed about recent developments in your field of expertise or interest. The second is adding hashtags to your own posts and articles, so others can easily find what you have to say on the matter.

Why are people using hashtags? There are plenty of reasons. Here is a short, non-exhaustive overview:

  • To comment on and contribute to a global online conversation. Hashtags provide context and relevance.
  • To stay in touch with your clients and see what they are talking about online (as well as find out what they may be saying about you!).
  • For (legal) research purposes, where they can be used for content discovery and sorting.
  • Hashtags are often used for humour and witty comments. #ButYouDontHaveToTakeMyWordForIt
  • For Business & Marketing purposes, because they are a great way:
    • To build and support your brand
    • To monitor trends and your brand
    • To boost a marketing campaign
    • To keep in touch with and engage your audience

Mind you, there are some rules to keep in mind when using hashtags. As Wikipedia points out, a hashtag may contain only letters, digits, and underscores. That means “spaces are an absolute no-no. Even if your hashtag contains multiple words, group them all together. If you want to differentiate between words, use capitals instead (#BlueJasmine). Uppercase letters will not alter your search results, so searching for #BlueJasmine will yield the same results as #bluejasmine.” (Mashable). Punctuation marks are also forbidden, so commas, periods, exclamation points, question marks and apostrophes are out. The same goes for asterisks (*), ampersands (&) and any other special characters.
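
To illustrate these rules, here is a minimal Python sketch (our own simplification, not any platform’s official validation logic). It accepts a hashtag made of ‘#’ followed by letters, digits and underscores, rejects spaces and punctuation, and compares hashtags case-insensitively:

```python
import re

# Illustrative check only, mirroring the rules described above.
VALID_HASHTAG = re.compile(r"^#[A-Za-z0-9_]+$")

def is_valid_hashtag(tag: str) -> bool:
    """Accept only '#' followed by letters, digits and underscores."""
    return bool(VALID_HASHTAG.match(tag))

def same_hashtag(a: str, b: str) -> bool:
    """Hashtag searches are case-insensitive, so compare in lowercase."""
    return a.lower() == b.lower()

print(is_valid_hashtag("#BlueJasmine"))              # True
print(is_valid_hashtag("#Blue Jasmine"))             # False: no spaces
print(is_valid_hashtag("#BlueJasmine!"))             # False: no punctuation
print(same_hashtag("#BlueJasmine", "#bluejasmine"))  # True: same search results
```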

In a follow-up article, we’ll focus on how to best make use of hashtags.

 

Sources:

 

International Guidelines for Ethical AI

In the last two months, i.e. in April and May 2019, both the EU Commission and the OECD published guidelines for trustworthy and ethical Artificial Intelligence (AI). In both cases, these are only guidelines and, as such, are not legally binding. Both sets of guidelines were compiled by experts in the field. Let’s have a closer look.

“Why do we need guidelines for trustworthy, ethical AI?” you may ask. Over the last few years, there have been multiple calls from experts, researchers, lawmakers and the judiciary to develop some kind of legal framework or guidelines for ethical AI. Several cases have been in the news where the ethics of AI systems came into question. One of the problem areas is bias with regard to gender, race, etc. There was, e.g., the case of COMPAS, risk assessment software that is used to predict the likelihood of somebody being a repeat offender. It turned out the system had a double racial bias: one in favour of white defendants, and one against black defendants. More recently, Amazon shelved its AI HR assistant because it systematically favoured male applicants. Another problem area is privacy, where there are concerns about deep learning / machine learning and about technologies like facial recognition.

In the case of the EU guidelines, another factor is at play as well. Both the US and China have a substantial lead over the EU when it comes to AI technologies. The EU saw its niche in trustworthy and ethical AI.

EU Guidelines

The EU guidelines were published by the EU Commission on 8 April 2019. (Before that, in December 2018, the European Parliament had already published a report in which it asked for a legal framework or guidelines for AI. The EU Parliament suggested AI systems should broadly be designed in accordance with the Three Laws of Robotics.) The Commission stated that trustworthy AI should be:

  • lawful, i.e. respecting all applicable laws and regulations,
  • ethical, i.e. respecting ethical principles and values, and
  • robust, from a technical perspective, while taking into account its social environment.

To that end, the guidelines put forward a set of 7 key requirements:

  • Human agency and oversight: AI systems should empower human beings, allowing them to make informed decisions and fostering their fundamental rights. At the same time, proper oversight mechanisms need to be ensured, which can be achieved through human-in-the-loop, human-on-the-loop, and human-in-command approaches.
  • Technical robustness and safety: AI systems need to be resilient and secure. They need to be safe, ensuring a fall-back plan in case something goes wrong, as well as being accurate, reliable and reproducible. That is the only way to ensure that unintentional harm can also be minimized and prevented.
  • Privacy and data governance: besides ensuring full respect for privacy and data protection, adequate data governance mechanisms must also be ensured, taking into account the quality and integrity of the data, and ensuring legitimised access to data.
  • Transparency: the data, system and AI business models should be transparent. Traceability mechanisms can help achieve this. Moreover, AI systems and their decisions should be explained in a manner adapted to the stakeholder concerned. Humans need to be aware that they are interacting with an AI system, and must be informed of the system’s capabilities and limitations.
  • Diversity, non-discrimination and fairness: Unfair bias must be avoided, as it could have multiple negative implications, from the marginalization of vulnerable groups to the exacerbation of prejudice and discrimination. Fostering diversity, AI systems should be accessible to all, regardless of any disability, and involve relevant stakeholders throughout their entire life cycle.
  • Societal and environmental well-being: AI systems should benefit all human beings, including future generations. It must hence be ensured that they are sustainable and environmentally friendly. Moreover, they should consider the environment, including other living beings, and their social and societal impact should be carefully considered.
  • Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes. Auditability, which enables the assessment of algorithms, data and design processes, plays a key role therein, especially in critical applications. Moreover, adequate and accessible redress should be ensured.

A pilot project will be launched later this year, involving the main stakeholders. It will review the proposal more thoroughly and provide feedback, upon which the guidelines can be fine-tuned. The EU also invites interested businesses to join the European AI Alliance.

OECD

The OECD consists of 36 members, approximately half of which are EU members. Non-EU members include the US, Japan, Australia, New Zealand, South Korea, Mexico and others. On 22 May 2019, the OECD member countries adopted the OECD Council Recommendation on Artificial Intelligence. As is the case with the EU guidelines, these are recommendations that are not legally binding.

The OECD Recommendation identifies five complementary values-based principles for the responsible stewardship of trustworthy AI:

  1. AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being.
  2. AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and they should include appropriate safeguards – for example, enabling human intervention where necessary – to ensure a fair and just society.
  3. There should be transparency and responsible disclosure around AI systems to ensure that people understand AI-based outcomes and can challenge them.
  4. AI systems must function in a robust, secure and safe way throughout their life cycles and potential risks should be continually assessed and managed.
  5. Organisations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning in line with the above principles.

Consistent with these values-based principles, the OECD also provides five recommendations to governments:

  1. Facilitate public and private investment in research & development to spur innovation in trustworthy AI.
  2. Foster accessible AI ecosystems with digital infrastructure and technologies and mechanisms to share data and knowledge.
  3. Ensure a policy environment that will open the way to deployment of trustworthy AI systems.
  4. Empower people with the skills for AI and support workers for a fair transition.
  5. Co-operate across borders and sectors to progress on responsible stewardship of trustworthy AI.

As you can see, many of the fundamental principles are similar in both sets of guidelines. And, as mentioned before, these EU and OECD guidelines are merely recommendations that are not legally binding. As far as the EU is concerned, at some point in the future, it may push through actual legislation that is based on these principles. The US has already announced it will adhere to the OECD recommendations.

 

Sources: