What Is Natural Language Processing (NLP)? Meaning, Techniques, and Models
The primary goal of natural language processing is to empower computers to comprehend, interpret, and produce human language.
- Since the days of the first commercial computers, natural language processing (NLP) has been a key goal of artificial intelligence (AI) research because it provides a natural, convenient interface between humans and machines.
- Now, we routinely speak to our devices, and NLP is everywhere.
- But what exactly is NLP? This article explains its meaning, techniques, models, and importance.
Beginner's Guide to Natural Language Processing (NLP)
Since the days of the first commercial computers, natural language processing (NLP) has been a key goal of artificial intelligence (AI) research because it provides a natural, convenient interface between humans and machines. Now, we routinely speak to our devices, and NLP is everywhere. But what exactly is NLP?
It’s easy to understand the importance of NLP, given the number of applications for it — question-and-answer (Q&A) systems, translation of text from one language to another, automatic summarization (of long texts into short summaries), grammar analysis and recommendation, sentiment analysis, and much more. This technology is even more important today, given the massive amount of unstructured data generated daily in the context of news, social media, scientific and technical papers, and various other sources in our connected world.
Today, when we ask Alexa or Siri a question, we don’t think about the complexity involved in recognizing speech, understanding the question’s meaning, and ultimately providing a response. Recent advances in state-of-the-art NLP models, such as Google’s BERT and its lighter variant ALBERT, are setting new benchmarks in the industry and allowing researchers to train models faster.
In the mid-1950s, IBM sparked tremendous excitement for language understanding through the Georgetown experiment, a joint development project between IBM and Georgetown University.
In the early years of the Cold War, IBM demonstrated the complex task of machine translation from Russian to English on its IBM 701 mainframe computer. Russian sentences were fed in on punch cards, and the resulting translations were sent to a printer. The application understood just 250 words and implemented six grammar rules (such as rearrangement, where words were reversed) to produce a simple translation. At the demonstration, 60 carefully crafted sentences were translated from Russian into English on the IBM 701. The event was attended by mesmerized journalists and key machine translation researchers, and it resulted in greatly increased funding for machine translation work.
Unfortunately, the ten years that followed the Georgetown experiment failed to meet the lofty expectations this demonstration engendered. Research funding soon dwindled, and attention shifted to other language understanding and translation methods.
This pattern is familiar in AI research, which has seen many AI springs and winters in which significant interest was generated only to lead to disappointment and failed promises. Given its importance, however, the allure of NLP meant that research continued, eventually breaking free of hard-coded rules and moving toward today’s state-of-the-art connectionist models.
Learn More: Top 10 Python Libraries for Machine Learning
What Is Natural Language Processing?
Natural language processing is an umbrella term for the diverse fields concerned with automatically modeling and understanding human language, enabling computers to learn from, analyze, and act on it. Consider a system such as Alexa, the AI-based virtual assistant developed by Amazon. It accepts voice as its input and converts that voice into a model of human language. It attempts to understand the request (by decomposing the language into its fundamental parts), processes the request, and then provides a response.
Each step represents a different subfield within NLP: speech recognition, natural language understanding (NLU), natural language generation (NLG), and text to speech. Speech recognition and text to speech are largely signal-processing problems, but the inner two parts (NLU and NLG) represent the core of NLP.
The complex algorithms that convert speech to text, break the text down to understand its meaning, create a response, and convert that response back to audio all run remotely in a cloud (a remote data center) dedicated to this service. The device endpoint in your home does little other than act as a conduit to the cloud. Two activities occur in the cloud to understand speech and then generate speech: NLU and NLG.
Natural language understanding
Natural language understanding is the capability to identify meaning (in some internal representation) from a text source. This definition is abstract (and complex), but NLU aims to decompose natural language into a form a machine can comprehend. This capability can then be applied to tasks such as machine translation, automated reasoning, and question answering.
Natural language generation
Natural language generation is the ability to produce meaningful text (in the context of human language) from some internal representation of information. This can mean constructing a sentence that conveys information held in a non-linguistic form. In certain NLP applications, NLG is used to generate text from a representation provided in a non-textual form (such as an image or a video).
Learn More: Facebook Talks Open-Source NLP in Device Debut
How Does Natural Language Processing Work?
NLP involves a series of steps that transform raw text data into a format that computers can process and derive meaning from. Following are the steps in detail with some examples.
Step 1: Tokenization
Tokenization involves splitting a piece of text into separate words or tokens. These tokens are the basic units that a computer can work with. For example, consider the sentence,
‘Natural Language Processing is fascinating!’
After tokenization, the sentence is divided into tokens:
['Natural', 'Language', 'Processing', 'is', 'fascinating', '!']
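As a rough illustration, here is how this step might look in Python using NLTK's word_tokenize (a minimal sketch, assuming NLTK is installed and its tokenizer data has been downloaded):

```python
# Minimal tokenization sketch with NLTK (one option among many).
import nltk
nltk.download("punkt", quiet=True)  # tokenizer models; newer NLTK versions may need "punkt_tab"
from nltk.tokenize import word_tokenize

sentence = "Natural Language Processing is fascinating!"
tokens = word_tokenize(sentence)
print(tokens)
# ['Natural', 'Language', 'Processing', 'is', 'fascinating', '!']
```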
Step 2: Text cleaning
Text cleaning involves removing unnecessary characters, symbols, and formatting from the text. Common tasks include converting text to lowercase, removing punctuation, and dealing with special characters. For example, the tokenized sentence
['Natural', 'Language', 'Processing', 'is', 'fascinating', '!']
After text cleaning:
['natural', 'language', 'processing', 'is', 'fascinating']
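A minimal sketch of this cleaning step in plain Python (lowercasing tokens and dropping pure-punctuation tokens) might look like this:

```python
import string

# Lowercase each token and drop tokens that are only punctuation.
tokens = ['Natural', 'Language', 'Processing', 'is', 'fascinating', '!']
cleaned = [t.lower() for t in tokens if t not in string.punctuation]
print(cleaned)
# ['natural', 'language', 'processing', 'is', 'fascinating']
```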
Step 3: Stopword removal
Stopwords are common words such as 'the', 'is', 'and', 'an', 'to', etc., which are needed to join sentences but don’t contribute much to the sentence’s overall meaning. They are often removed to reduce noise in the data. For example, after stopword removal, the sentence becomes
['natural', 'language', 'processing', 'fascinating']
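Using NLTK's built-in English stopword list (one common choice among many), this step could be sketched as:

```python
import nltk
nltk.download("stopwords", quiet=True)  # one-time download of the stopword lists
from nltk.corpus import stopwords

stop_words = set(stopwords.words("english"))
tokens = ['natural', 'language', 'processing', 'is', 'fascinating']
filtered = [t for t in tokens if t not in stop_words]
print(filtered)
# ['natural', 'language', 'processing', 'fascinating']
```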
Step 4: Part-of-speech (POS) tagging
POS tagging involves assigning a grammatical category (such as noun, verb, adjective, etc.) to each word in the text. This step helps understand the syntactic structure of the sentence. For example
['natural', 'language', 'processing', 'fascinating']
After POS tagging:
[('natural', 'ADJ'), ('language', 'NOUN'), ('processing', 'NOUN'), ('fascinating', 'ADJ')]
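A quick sketch with NLTK's pre-trained tagger, mapped to the coarse "universal" tagset; the exact tags can vary slightly by tagger version:

```python
import nltk
nltk.download("averaged_perceptron_tagger", quiet=True)  # pre-trained POS tagger
nltk.download("universal_tagset", quiet=True)            # coarse tag mapping

tokens = ['natural', 'language', 'processing', 'fascinating']
print(nltk.pos_tag(tokens, tagset="universal"))
# Approximately: [('natural', 'ADJ'), ('language', 'NOUN'), ('processing', 'NOUN'), ('fascinating', 'ADJ')]
```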
Step 5: Named entity recognition (NER)
NER identifies and classifies named entities, such as person names, locations, dates, etc., in the text. This step is crucial to understand the entities involved in the given text. For example
‘The company Google was founded by Larry Page and Sergey Brin in September 1998.’
After NER:
[('The', 'O'), ('company', 'O'), ('Google', 'ORG'), ('was', 'O'), ('founded', 'O'), ('by', 'O'), ('Larry', 'PERSON'), ('Page', 'PERSON'), ('and', 'O'), ('Sergey', 'PERSON'), ('Brin', 'PERSON'), ('in', 'O'), ('September', 'DATE'), ('1998', 'DATE'), ('.', 'O')]
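One common way to run NER in practice is with spaCy's pre-trained English pipeline; this sketch assumes spaCy and its en_core_web_sm model are installed. (Note that spaCy reports whole entity spans such as "Larry Page" rather than per-token labels.)

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # small pre-trained English pipeline
doc = nlp("The company Google was founded by Larry Page and Sergey Brin in September 1998.")
for ent in doc.ents:
    print(ent.text, ent.label_)
# Google ORG / Larry Page PERSON / Sergey Brin PERSON / September 1998 DATE
```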
Step 6: Sentiment analysis (optional)
Sentiment analysis is the process of determining the sentiment or emotion expressed in the text. This step can help analyze customer feedback, social media posts, and more. For example
‘Customer support was excellent, but the product quality needs improvement.’
Sentiment analysis output
Positive sentiment for ‘Customer support was excellent.’
Negative sentiment for ‘the product quality needs improvement.’
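One simple way to approximate this step is NLTK's VADER lexicon-based scorer, sketched below; lexicon approaches are fast but can miss nuance (a limitation discussed later in this article):

```python
import nltk
nltk.download("vader_lexicon", quiet=True)  # one-time download of VADER's word list
from nltk.sentiment import SentimentIntensityAnalyzer

sia = SentimentIntensityAnalyzer()
print(sia.polarity_scores("Customer support was excellent."))
print(sia.polarity_scores("The product quality needs improvement."))
# Each call prints neg/neu/pos proportions plus a compound score in [-1, 1];
# the first clause scores clearly positive, while the second is harder for a
# simple lexicon to judge.
```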
Step 7: Parsing and semantic analysis (optional)
Parsing involves analyzing the grammatical structure of a sentence to understand the relationships between words. Semantic analysis aims to derive the meaning of the text and its context. These steps are often more complex and can involve advanced techniques such as dependency parsing or semantic role labeling.
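For instance, spaCy exposes a dependency parse alongside its other annotations; a short sketch (again assuming the en_core_web_sm model is installed):

```python
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Natural language processing is fascinating.")
for token in doc:
    # word, its dependency relation, and the head word it attaches to
    print(token.text, token.dep_, token.head.text)
```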
Step 8: Machine learning and NLP models
NLP models such as neural networks and machine learning algorithms are often used to perform various NLP tasks. These models are trained on large datasets and learn patterns from the data to make predictions or generate human-like responses. Popular NLP models include Recurrent Neural Networks (RNNs), Transformers, and BERT (Bidirectional Encoder Representations from Transformers).
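As a rough illustration of how such pre-trained models are used in practice, the Hugging Face Transformers library wraps them behind a one-line pipeline API (a default model is downloaded on first use):

```python
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # loads a default pre-trained model
print(classifier("Natural Language Processing is fascinating!"))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```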
Step 9: Natural language generation (optional)
Natural language generation (NLG) is the process of generating human-like text based on the insights gained from NLP tasks. NLG can be used in chatbots, automatic report writing, and other applications.
These are the basic steps involved in natural language processing. Depending on the complexity of the NLP task, additional techniques and steps may be required. NLP is a vast and evolving field, and researchers continuously work on improving the performance and capabilities of NLP systems.
Learn more: Why NLP is the Next Frontier in AI for Enterprises
NLP Languages and Libraries
The primary goal of NLP is to empower computers to comprehend, interpret, and produce human language. As language is complex and ambiguous, NLP faces numerous challenges, such as language understanding, sentiment analysis, language translation, chatbots, and more. To tackle these challenges, developers and researchers use various programming languages and libraries specifically designed for NLP tasks.
Let’s understand the key languages and libraries used for NLP tasks.
Programming languages for NLP
- Python: Due to its simplicity, user-friendly nature, and extensive collection of libraries and frameworks, Python is the most widely adopted programming language for NLP. Python’s extensive ecosystem makes it ideal for rapid prototyping and building NLP applications efficiently. Popular NLP libraries in Python include NLTK (Natural Language Toolkit), spaCy, Gensim, and the Transformers library.
- Java: Java is a widely used language for NLP applications, particularly in industries where performance, robustness, and scalability are crucial. Java-based NLP libraries like Stanford NLP and Apache OpenNLP provide tools like part-of-speech tagging, named entity recognition, and sentiment analysis.
- JavaScript: JavaScript is commonly employed for web-based NLP applications and chatbots, making it popular for customer service and website interactions. With Node.js, JavaScript can also be used for server-side NLP tasks.
- C++: C++ is preferred for computationally intensive NLP tasks that require high efficiency and performance. While not as developer-friendly as Python, C++ is widely used in areas such as large-scale language modeling.
- R: R is popular among statisticians and researchers for statistical NLP tasks and text mining. It provides various packages for data preprocessing, text analysis, and visualization.
- Scala: Scala’s compatibility with Java and functional programming style make it suitable for building NLP applications on Apache Spark, a distributed computing platform.
NLP libraries and frameworks
You can find several NLP tools and libraries to fit your needs regardless of language and platform. This section lists some of the most popular toolkits and libraries for NLP.
1. NLTK (Natural Language Toolkit)
The king of NLP is the Natural Language Toolkit (NLTK) for the Python language. NLTK is easy to set up and use, and it includes a hands-on starter guide to help you use the available Python application programming interfaces (APIs). It covers most NLP algorithms you’ll need, and in many cases you’ll find several algorithm choices for a given component. For example, the TextBlob library, built on top of NLTK, is an open-source extension that provides machine translation, sentiment analysis, and several other NLP services.
2. spaCy
A competitor to NLTK is the spaCy library, also for Python. Although spaCy lacks the breadth of algorithms that NLTK provides, it offers a cleaner API and simpler interface. The spaCy library also claims to be faster than NLTK in some areas; however, it lacks the language support of NLTK.
3. R language and environment
The R language and environment is a popular data science toolkit that continues to grow in popularity. Like Python, R supports many extensions, called packages, that provide new functionality for R programs. In addition to providing bindings for Apache OpenNLP, packages exist for text mining, and there are tools for word embeddings, tokenizers, and various statistical models for NLP.
4. PyTorch-NLP
PyTorch-NLP is another Python library designed for the rapid prototyping of NLP applications. PyTorch-NLP’s ability to implement deep learning networks, including the LSTM network, is a key differentiator. A similar offering is Deeplearning4j (Deep Learning for Java), which supports basic NLP services (tokenization, etc.) and the ability to construct deep neural networks for NLP tasks.
5. Stanford CoreNLP
Stanford CoreNLP is an NLTK-like library meant for NLP-related processing tasks, and it’s a good choice when processing large amounts of data. Stanford CoreNLP provides tools such as part-of-speech tagging, named entity recognition, parsing, and sentiment analysis, among other features, and it can serve as the language-processing backbone for chatbots and conversational interfaces.
6. Apache OpenNLP
Apache OpenNLP is a Java-based machine learning library for NLP. OpenNLP is an older library, but it supports some of the more commonly required services for NLP, including tokenization, POS tagging, named entity extraction, and parsing.
7. TensorFlow and Keras
TensorFlow, along with its high-level API Keras, is a popular deep learning framework used for NLP. It allows developers to build and train neural networks for tasks such as text classification, sentiment analysis, machine translation, and language modeling.
8. Scikit-learn
While primarily a machine learning library, Scikit-learn offers some tools for basic text classification and clustering, making it useful for simple NLP tasks.
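For example, a basic text classifier can be sketched in a few lines with scikit-learn by combining TF-IDF features with a linear model (the tiny dataset below is invented purely for illustration):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["great product, works well", "terrible support, very slow",
         "love it, excellent quality", "awful experience, broke quickly"]
labels = ["pos", "neg", "pos", "neg"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)                          # learn from the toy examples
print(model.predict(["excellent, works great"]))  # expected: ['pos']
```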
9. WordNet
WordNet is a lexical database for the English language that provides synonyms, antonyms, and relationships between words. It is used for tasks such as word sense disambiguation and semantic similarity.
10. Gensim
Focusing on topic modeling and document similarity analysis, Gensim utilizes techniques such as Latent Semantic Analysis (LSA) and Word2Vec. This library is widely employed in information retrieval and recommendation systems.
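A minimal Word2Vec sketch with Gensim follows; the toy corpus is made up, real applications train on far larger text collections, and the Gensim 4.x API is assumed:

```python
from gensim.models import Word2Vec

corpus = [["natural", "language", "processing", "is", "fascinating"],
          ["machine", "translation", "is", "a", "language", "task"],
          ["deep", "learning", "models", "process", "language"]]

model = Word2Vec(sentences=corpus, vector_size=50, window=3, min_count=1, epochs=50)
print(model.wv.most_similar("language", topn=3))  # nearest neighbors in the toy corpus
```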
As can be seen, NLP uses a wide range of programming languages and libraries to address the challenges of understanding and processing human language. The choice of language and library depends on factors such as the complexity of the task, data scale, performance requirements, and personal preference.
Learn more: Why Natural Language Processing Will Steer the AI Ship: Experts’ Take
Key NLP Techniques
NLP has advanced over time from the rules-based methods of the early period. The rules-based method continues to find use today, but the rules have given way to machine learning (ML) and more advanced deep learning approaches.
1. Rules-based methods
Rules-based approaches were some of the earliest methods used (such as in the Georgetown experiment), and they remain in use today for certain types of applications. They are transparent and work well within a narrow, well-defined scope. Context-free grammars are a popular example of a rules-based approach.
Rules are commonly defined by hand, and a skilled expert is required to construct them. Like expert systems, the number of grammar rules can become so large that the systems are difficult to debug and maintain when things go wrong. Unlike more advanced approaches that involve learning, however, rules-based approaches require no training. Instead, they rely on rules that humans construct to understand language.
Rules-based approaches often imitate how humans parse sentences down to their fundamental parts. A sentence is first tokenized down to its unique words and symbols (such as a period indicating the end of a sentence). Preprocessing, such as stemming, then reduces a word to its stem or base form (removing suffixes like -ing or -ly). The resulting tokens are parsed to understand the structure of the sentence. Then, this parse tree is applied to pattern matching with the given grammar rule set to understand the intent of the request. The rules for the parse tree are human-generated and, therefore, limit the scope of the language that can effectively be parsed.
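As a small, illustrative sketch of this kind of preprocessing, NLTK's Porter stemmer reduces words to their (not always dictionary-valid) stems:

```python
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()
print([stemmer.stem(w) for w in ["running", "runs", "easily", "cats"]])
# e.g. ['run', 'run', 'easili', 'cat']  -- stems need not be real words
```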
The major downside of rules-based approaches is that they don’t scale to more complex language. Nevertheless, rules continue to be used for simple problems or in the context of preprocessing language for use by more complex connectionist models.
2. Statistical methods
Statistical methods for NLP are defined as those that involve statistics and, in particular, the acquisition of probabilities from a data set in an automated way (i.e., they’re learned). This method obviously differs from the previous approach, where linguists construct rules to parse and understand language. In the statistical approach, instead of the manual construction of rules, a model is automatically constructed from a corpus of training data representing the language to be modeled.
An important example of this approach is a hidden Markov model (HMM). An HMM is a probabilistic model that allows the prediction of a sequence of hidden variables from a set of observed variables. In NLP, the observed variables are the words, and the hidden variables are the underlying states (for example, an unintelligible or missing word, or a part of speech) that best explain them; the model assigns probabilities to the candidate sequences.
Consider the sequence of words “What is the X?” An HMM trained on a corpus of data may have several options for X (perhaps it was an unintelligible word), given the sequence of words that preceded it. But if the application was a voice assistant, there’s a higher probability that X is “time.”
HMMs have also been applied to other NLP problems, such as part-of-speech (POS) tagging. POS tagging, as the name implies, tags each word in a sentence with its part of speech (noun, verb, adverb, etc.). POS tagging is useful in many areas of NLP, including text-to-speech conversion and named-entity recognition (to classify things such as locations, quantities, and other key concepts within sentences).
When two adjacent words are used as a sequence (meaning that one word probabilistically leads to the next), the result is called a bigram in computational linguistics. If the sequence is three words, then it’s called a trigram. These n-gram models are useful in several problem areas beyond computational linguistics and have also been used in DNA sequencing.
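To make the idea concrete, here is a toy bigram model that estimates P(next word | current word) from counts over a tiny made-up corpus:

```python
from collections import Counter, defaultdict

corpus = "what is the time what is the weather what is the date".split()

# Count how often each word is followed by each next word.
bigram_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    bigram_counts[current][nxt] += 1

def next_word_probs(word):
    counts = bigram_counts[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probs("is"))   # {'the': 1.0}
print(next_word_probs("the"))  # 'time', 'weather', 'date' each at ~0.33
```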
3. Connectionist methods
Connectionist methods rely on mathematical models of neuron-like networks for processing, commonly called artificial neural networks. Over the last decade, deep learning models, a more elaborate form of these networks, have met or exceeded prior approaches to NLP.
Deep learning models are based on the multilayer perceptron but include new types of neurons and many layers of individual neural networks, which give them their depth. Among the earliest successful deep neural networks were convolutional neural networks (CNNs), which excelled at vision-based tasks such as Google’s widely publicized work recognizing cats in images. Beyond such toy problems, CNNs were eventually deployed for practical visual tasks, such as determining whether skin lesions were benign or malignant, where these deep networks have achieved accuracy comparable to that of a board-certified dermatologist.
Deep learning has found a home outside of vision-based problems. In fact, it has quickly become the de facto solution for various natural language tasks, including machine translation and even summarizing a picture or video through text generation (an application explored in the next section).
Other connectionist methods have also been applied, including recurrent neural networks (RNNs), which are well suited to sequential problems (such as sentences). RNNs have been around for some time, but newer variants, such as the long short-term memory (LSTM) model, are widely used for text processing and generation.
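As a sketch of what such a model looks like in code, here is a minimal (untrained) LSTM text classifier in Keras; the vocabulary size and layer widths are illustrative assumptions only:

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=10000, output_dim=64),  # token ids -> vectors
    tf.keras.layers.LSTM(64),                                   # read the sequence
    tf.keras.layers.Dense(1, activation="sigmoid"),             # e.g. positive/negative
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

dummy_batch = np.random.randint(0, 10000, size=(2, 20))  # 2 fake sequences of 20 token ids
print(model(dummy_batch).shape)  # (2, 1): one score per sequence
```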
Learn More: What is Deep Learning?
Top NLP Models
Language models serve as the foundation for constructing sophisticated NLP applications. AI and machine learning practitioners rely on pre-trained language models to effectively build NLP systems. These models employ transfer learning, where a model pre-trained on one dataset to accomplish a specific task is adapted for various NLP functions on a different dataset.
Prominent examples of large language models (LLMs), such as GPT-3 and BERT, excel at intricate tasks, often invoked simply through carefully crafted input text (prompts) that draws on the model’s capabilities.
Let’s look at some of the top NLP models:
1. BERT (Bidirectional Encoder Representations from Transformers)
BERT is a groundbreaking NLP pre-training technique developed by Google. It leverages the Transformer neural network architecture for comprehensive language understanding. BERT is highly versatile and excels at language understanding tasks such as question answering, sentiment analysis, and named entity recognition. On release, it set new state-of-the-art results across 11 NLP tasks, and it finds exemplary applications in Google Search, Google Docs, and Gmail Smart Compose for text prediction.
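As a rough illustration of BERT's masked-language-model pre-training objective, the Transformers library can ask a pre-trained BERT to fill in a masked word (this assumes the transformers package is installed; the model is downloaded on first use):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")
for prediction in fill_mask("Natural language processing is a branch of [MASK]."):
    # BERT's top candidate words for the masked position, with their scores
    print(prediction["token_str"], round(prediction["score"], 3))
```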
2. GPT-3
GPT-3 is a transformer-based NLP model renowned for its diverse capabilities, including translation, question answering, and more. With recent advancements, it excels at writing news articles and generating code. What sets GPT-3 apart is its ability to perform many downstream tasks without fine-tuning, effectively modeling the statistical dependencies between words. The model’s remarkable performance is attributed to its 175 billion parameters and its training on a colossal corpus of roughly 45 TB of text sourced from across the internet.
3. OpenAI’s GPT-2
OpenAI’s GPT-2 is an impressive language model showcasing autonomous learning skills. With training on millions of web pages from the WebText dataset, GPT-2 demonstrates exceptional proficiency in tasks such as question answering, translation, reading comprehension, summarization, and more without explicit guidance. It can generate coherent paragraphs and achieve promising results in various tasks, making it a highly competitive model.
4. RoBERTa
RoBERTa, short for the Robustly Optimized BERT pre-training approach, represents an optimized method for pre-training self-supervised NLP systems. Built on BERT’s language masking strategy, RoBERTa learns and predicts intentionally hidden text sections. As a pre-trained model, RoBERTa excels in all tasks evaluated by the General Language Understanding Evaluation (GLUE) benchmark.
5. ALBERT
Google introduced ALBERT as a smaller and faster version of BERT, addressing the slow training that comes with very large models. ALBERT uses two techniques, factorized embedding parameterization and cross-layer parameter sharing, to reduce the number of parameters. Factorized embedding separates the size of the vocabulary embedding from the size of the hidden layers, while cross-layer parameter sharing prevents the parameter count from growing with the depth of the network.
6. XLNet
XLNet explores autoregressive pre-training, allowing the model to learn bidirectional context and overcome the limitations associated with models such as BERT that use denoising autoencoding techniques.
7. T5
T5, known as the Text-to-Text Transfer Transformer, is a potent NLP technique that initially trains models on data-rich tasks, followed by fine-tuning for downstream tasks. Google introduced a cohesive transfer learning approach in NLP, which has set a new benchmark in the field, achieving state-of-the-art results. The model’s training leverages web-scraped data, contributing to its exceptional performance across various NLP tasks.
8. ELECTRA
ELECTRA, short for Efficiently Learning an Encoder that Classifies Token Replacements Accurately, is a more recent method for pre-training language models. Instead of masking tokens as BERT does, ELECTRA corrupts the input by replacing some tokens and trains the model to detect which tokens were replaced, a more sample-efficient objective that performs well across various NLP tasks.
9. DeBERTa
DeBERTa, introduced by Microsoft Researchers, has notable enhancements over BERT, incorporating disentangled attention and an advanced mask decoder. The upgraded mask decoder imparts the decoder with essential information regarding both the absolute and relative positions of tokens or words, thereby improving the model’s ability to capture intricate linguistic relationships.
10. StructBERT
StructBERT is an advanced pre-trained language model strategically devised to incorporate two auxiliary tasks. These tasks exploit the language’s inherent sequential order of words and sentences, allowing the model to capitalize on language structures at both the word and sentence levels. This design choice facilitates the model’s adaptability to varying levels of language understanding demanded by downstream tasks.
Learn more: Top 10 Machine Learning Algorithms in 2022
Uses and Importance of NLP
Q&A systems are a prominent area of focus today, but the capabilities of NLU and NLG are important in many other areas. Machine translation, the translation of text between languages, is another key application you can find online (e.g., Google Translate). You can also find NLU and NLG in systems that provide automatic summarization (that is, they provide a summary of long written papers).
NLU is useful for understanding the sentiment (or opinion) expressed about a product, service, or topic in social media comments. Finally, you can find NLG in applications that automatically summarize the contents of an image or video.
1. Machine translation
Machine translation has come a long way from the simple demonstration of the Georgetown experiment. Today, deep learning is at the forefront of machine translation. Because deep neural networks operate on numbers, the tokenized words to be translated are first converted into vectors, either one-hot encodings (where a single element of the vector signifies the word) or word embeddings (which encode each word as a vector of learned characteristics). These vectors are then fed into an RNN that maintains knowledge of the current and past words (to exploit the relationships among words in sentences). Based on training data consisting of translations between one language and another, RNNs have achieved state-of-the-art performance in machine translation.
2. Text summarization
Being able to create a shorter summary of longer text can be extremely useful given the limited time we have and the massive amount of data we deal with daily. A common approach is an RNN encoder-decoder model: the input text is treated as a sequence (with the words encoded using a word embedding) and fed into a bidirectional LSTM that includes an attention mechanism (i.e., a way of deciding where to apply focus).
This approach has exceeded the prior state of the art for text summarization. But it does have downsides: it handles words outside its vocabulary poorly and can exhibit behaviors such as repeating information.
3. Sentiment analysis
Sentiment analysis is the automated analysis of text to identify a polarity, such as good, bad, or indifferent. In social media, sentiment analysis means cataloging material about something such as a service or product and then determining the sentiment (or opinion) expressed about that object. A more advanced version of sentiment analysis is called intent analysis. This version seeks to understand the intent behind the text rather than simply what it says.
Early versions of sentiment analysis were basic and lacked nuance. Given a block of text, the algorithm counted the number of polarized words in the text; if there were more negative words than positive ones, the sentiment would be defined as negative. Depending on sentence structure, this approach could easily lead to bad results (for example, from sarcasm).
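A toy version of that early counting approach, with made-up word lists, shows both the idea and why it breaks down:

```python
POSITIVE = {"excellent", "good", "great", "love"}   # illustrative word lists only
NEGATIVE = {"bad", "poor", "terrible", "awful"}

def naive_sentiment(text):
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(naive_sentiment("The support was excellent and the staff were great"))  # positive
print(naive_sentiment("Not good at all"))  # also "positive" -- negation fools the word count
```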
Deep learning has been found to be highly accurate for sentiment analysis, with the downside that a significant training corpus is required to achieve accuracy. The deep neural network learns the structure of word sequences and the sentiment of each sequence. Given the variable nature of sentence length, an RNN is commonly used and can consider words as a sequence. A popular deep neural network architecture that implements recurrence is LSTM.
4. Text mining
Unstructured data accounts for 80% of the data created daily. The ability to mine these data to retrieve information or run searches is important. Text mining refers to a broad field that encompasses a disparate set of capabilities for manipulating text, including concept/entity extraction (i.e., identifying key elements of a text), text categorization (i.e., labeling text with tag categories), and text clustering (i.e., grouping similar texts).
As a diverse set of capabilities, text mining uses a combination of statistical NLP methods and deep learning. With the massive growth of social media, text mining has become an important way to gain value from textual data.
5. Caption generation
A fascinating example of the power of deep learning is the generation of captions for images or videos — an ability that would have been thought out of reach a decade ago. Caption generation helps categorize photos and their contents for search.
Recall that CNNs were designed for images, so not surprisingly, they’re applied here in the context of processing an input image and identifying features from that image. These features output from the CNN are applied as inputs to an LSTM network for text generation.
Building a caption-generating deep neural network is both computationally expensive and time-consuming, given the training data set required (thousands of images and predefined captions for each). Without a training set for supervised learning, unsupervised architectures have been developed, including a CNN and an RNN, for image understanding and caption generation. Another CNN/RNN evaluates the captions and provides feedback to the first network.
Learn More: How to Build a Career in Artificial Intelligence and Machine Learning
Takeaway
NLP has evolved since the 1950s, when language was parsed through hard-coded rules and reliance on a subset of language. The 1990s introduced statistical methods for NLP that enabled computers to be trained on the data (to learn the structure of language) rather than be told the structure through rules. Today, deep learning has changed the landscape of NLP, enabling computers to perform tasks that would have been thought impossible a decade ago. Deep learning has enabled deep neural networks to peer inside images, describe their scenes, and provide overviews of videos. And the best is yet to come.
Did this article give you a clear idea about natural language processing? Share with us on LinkedIn, X, or Facebook. We’d love to hear from you!