The LLVM compiler infrastructure project is a powerful, versatile collection of modular technologies for constructing compilers and related tools. Since its inception, LLVM has grown into a rich ecosystem, providing a novel approach to compile-time, link-time, runtime, and “idle-time” compilation. It is used by a variety of programming languages, from industry staples like C and C++, to newer systems programming languages like Rust, to high-performance computing languages like Julia.
However, navigating the vast and complex world of LLVM can be challenging for both newcomers and experienced developers alike. That’s where “Mastering LLVM – A Comprehensive 10-Hour Course” comes in. This course is designed to provide an intensive, hands-on exploration of LLVM, equipping participants with the knowledge and skills they need to use this powerful technology effectively.
Over the span of 10 hours, we will dive into topics ranging from the basics of compilers, an overview of LLVM’s architecture, to more advanced topics such as customizing LLVM, and case studies of real-world applications. Each hour will focus on a specific aspect of LLVM, offering a mix of theoretical instruction and practical exercises to cement your understanding.
Whether you’re a student, a researcher, a hobbyist, or a professional developer, this course will offer valuable insights into the workings of LLVM. By the end of the course, you will not only have a deep understanding of LLVM, but also the confidence to apply this knowledge in your own projects.
Let’s embark on this journey together into the fascinating world of LLVM. Welcome to “Mastering LLVM – A Comprehensive 10-Hour Course”!
Hour 1 Handouts: Introduction to Compiler Basics and LLVM
Understanding Compilers: Frontend, Middle, Backend
Compilers: An Overview
- A compiler is a computer program that transforms code written in one programming language (the source language) into another language (the target language).
Stages of a Compiler
- Frontend: The first stage of a compiler. This stage understands the syntax and semantics of the language, checks for errors, and builds an Abstract Syntax Tree (AST).
- Lexical Analysis: Converts source code into a series of tokens.
- Syntax Analysis (Parsing): Converts the token series into an AST based on grammar rules.
- Semantic Analysis: Applies context-specific rules to the AST (variable binding, type checking, etc.).
- Middle end (Intermediate Representation, IR): The second stage of a compiler. This stage translates the AST into a language-agnostic intermediate representation and performs optimizations on it.
- Translation: Converts the AST into a lower-level IR.
- Optimization: Performs transformations on the IR to improve code efficiency.
- Backend: The last stage of a compiler. This stage generates machine code or bytecode from the IR and performs optimizations that are specific to the target machine.
- Code Generation: Transforms the IR into machine code or bytecode.
- Code Optimization: Makes final optimizations specific to the target architecture.
LLVM Introduction: History, Purpose, and Architecture
LLVM: History
- The LLVM project started as a research project at the University of Illinois, with the goal of providing a modern, SSA-based compilation strategy capable of supporting both static and dynamic compilation of arbitrary programming languages.
- LLVM was designed from the start to be a reusable library with well-defined interfaces, making it easy to add new capabilities and features.
LLVM: Purpose
- LLVM provides a series of modular compiler and toolchain technologies.
- Its flexibility makes it a great choice for a wide variety of tasks, such as enabling new languages and tools, improving existing ones, and performing research on new compilation strategies and optimizations.
LLVM: Architecture
- The central part of LLVM’s architecture is the Intermediate Representation (IR), a low-level programming language that serves as a common, neutral ground for both the frontend and backend stages.
- The LLVM IR deliberately avoids high-level, language-specific constructs such as objects, classes, and methods; its simple, low-level type system is what makes it a universal, language-independent format.
- Frontends parse source code into LLVM IR, various transformations and optimizations are applied to the IR, and then backends generate machine code from the IR.
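To make this flow concrete: a C function such as `int add(int a, int b) { return a + b; }` is translated by Clang into LLVM IR roughly like the following (hand-simplified; the exact output depends on the Clang version and optimization level):

```llvm
define i32 @add(i32 %a, i32 %b) {
entry:
  %sum = add nsw i32 %a, %b
  ret i32 %sum
}
```

A backend such as the x86 one then lowers this IR to machine instructions.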
Hour 2 Handouts: Understanding LLVM IR (Intermediate Representation)
LLVM IR Basics: Static Single Assignment (SSA), Types, Values, Modules
LLVM IR: Overview
- The LLVM Intermediate Representation (IR) is a low-level programming language that serves as a common, neutral ground for both the frontend and backend stages of the compiler.
Static Single Assignment (SSA)
- LLVM IR uses Static Single Assignment (SSA) form, which means each variable is assigned exactly once and every variable is defined before it is used.
- This property greatly simplifies certain types of analyses and transformations.
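As a hand-written illustration of SSA, note that each register below is assigned exactly once:

```llvm
; Computes (a + b) * (a + b).
define i32 @square_sum(i32 %a, i32 %b) {
entry:
  %sum = add i32 %a, %b        ; %sum is defined here, once...
  %res = mul i32 %sum, %sum    ; ...and only used afterwards.
  ret i32 %res
}
```

If the source program reassigns a variable, the frontend introduces a fresh SSA name (or a phi node at control-flow joins) for each assignment.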
Types in LLVM IR
Types in LLVM IR are a critical aspect of its static type system. They represent the kind of value and determine the operations that can be performed on values. Here’s a detailed look at the different types in LLVM:
- Integer Types: These represent integral numbers and are characterized by a non-negative bit width. For example, `i32` is a 32-bit integer type.
- Floating-Point Types: These represent IEEE 754 floating-point numbers. Examples include `float` (32-bit) and `double` (64-bit).
- Boolean Type: This is a special integer type with a bit width of 1, `i1`, often used to represent `true` or `false`.
- Function Types: These represent function signatures. A function type is defined by its return type and a list of parameter types. For instance, `i32 (i32, i32)*` is a pointer to a function that takes two 32-bit integers as parameters and returns a 32-bit integer.
- Structure Types: These represent a composition of other types, similar to `struct`s in C. For example, `{ i32, i32, float }` is a structure with two integers and one float.
- Array Types: These represent a sequence of elements of the same type. For example, `[10 x i32]` is an array of ten 32-bit integers.
- Pointer Types: These reference or address a memory location. For instance, `i32*` is a pointer to a 32-bit integer.
- Vector Types: These are SIMD vector types, such as `<4 x i32>`, used for data-parallel operations like those on graphics processors.
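A few of these types in context, in a hand-written IR fragment (illustrative only):

```llvm
@counter = global i32 0                        ; global of integer type i32
@table   = global [10 x i32] zeroinitializer   ; array type
%pair    = type { i32, float }                 ; named structure type

define i1 @is_positive(i32 %x) {               ; function type i1 (i32)
entry:
  %cmp = icmp sgt i32 %x, 0                    ; icmp produces a boolean i1
  ret i1 %cmp
}
```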
Values in LLVM IR
In LLVM IR, everything is a value, including functions and variables. Each value has a type. Values are the operands of the instructions. Here’s a brief explanation:
- Constants: These are fixed values like integer, floating-point, or null constants.
- Instructions: These are operations that consume and produce values. For example, an `add` instruction is itself a value, representing the result of the addition.
- Global Variables: These are variables declared at the module level.
- Function Arguments: These are inputs to functions, declared at the start of a function definition.
- Local Variables: These are the result of instructions and are always in SSA form.
Modules in LLVM IR
A Module is the highest-level structure in LLVM IR. It contains a list of global variables, functions, and symbol table entries. A Module can be likened to a translation unit in the C language. It serves as the entry point for many LLVM transformations, and all interaction with the code occurs through it.
Generating LLVM IR: Manual Creation and Tools
Manual Creation of LLVM IR
- LLVM IR can be written manually for testing and educational purposes. It’s a good exercise to understand its syntax and semantics.
Tools for Generating LLVM IR
- Typically, LLVM IR is generated by compiler frontends from higher-level languages. For example, Clang generates LLVM IR from C/C++ code.
- `llvm-as` and `llvm-dis` convert between the human-readable (`.ll`) and bitcode (`.bc`) forms of LLVM IR.
Hour 3 Handouts: LLVM Frontend: Clang
Introduction to Clang: Features and Advantages
Clang: An Overview
- Clang is a compiler front end for the C, C++, Objective-C and Objective-C++ programming languages. It uses LLVM as its backend and has been part of the LLVM release cycle since LLVM 2.6.
Features of Clang
- Performance: Clang is designed for fast compile times while generating efficient code, making it a great choice for applications where performance matters.
- Expressive Diagnostics: Clang provides rich and understandable error and warning messages. It can also generate fix-it hints, suggesting potential corrections for errors.
- Modular Design: Clang is designed to be able to reuse its components across multiple tools. This has enabled the development of many tools for tasks like refactoring and static analysis.
- Compatibility: Clang aims to support a broad range of C and C++ standards, and it strives for compatibility with GCC, MSVC, and other compilers.
Advantages of Clang
- Speed: Clang is known for its fast compile times and low memory usage.
- Clean and Simple Code: Clang has a simpler and more understandable codebase compared to other compilers.
- Static Analysis: Clang includes a static analyzer that checks code for common sources of errors.
- Cross Compilation: Clang makes it easier to cross-compile code for different architectures.
Working with Clang: Compilation, Error Messages, Debugging
Compilation with Clang
- Clang uses a simple command-line interface for compiling C/C++ code. For example, `clang my_program.c -o my_program` compiles the C source file `my_program.c` into an executable named `my_program`.
Understanding Error Messages
- Clang provides clear and descriptive error messages. For instance, if you make a typo or use a function incorrectly, Clang will point out the error and often suggest a fix.
Debugging with Clang
- Clang can generate debugging information for debuggers like GDB or LLDB. This is usually done by adding the `-g` option to the compilation command, like so: `clang -g my_program.c -o my_program`.
Hour 4 Handouts: LLVM Backends
Understanding Backends: What, Why, and How
What is a Backend?
- In the context of compilers, the backend is the component that takes the intermediate representation (IR) of the code and translates it into the machine code for a specific target architecture. The backend is responsible for optimizing this machine code for the target hardware.
Why are Backends Important?
- Backends are responsible for generating efficient machine code for the target hardware.
- They can apply hardware-specific optimizations that are impossible at higher levels of abstraction.
How Do Backends Work?
- The backend takes the IR and goes through a series of stages, including target instruction selection, instruction scheduling, register allocation, and machine-dependent optimizations.
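Register allocation, one of the stages above, can be illustrated with a toy linear-scan allocator over live intervals. This is an invented sketch for intuition only; LLVM's real allocators (greedy, basic, fast) are far more sophisticated, and a production linear scan would spill the interval that ends furthest away rather than the incoming one:

```cpp
#include <algorithm>
#include <map>
#include <set>
#include <string>
#include <utility>
#include <vector>

// A value is live from `start` to `end` (inclusive).
struct Interval { std::string name; int start, end; };

// Toy linear scan: walk intervals in order of start point, freeing
// registers whose interval has ended; values that don't fit spill.
std::map<std::string, std::string>
linearScan(std::vector<Interval> ivs, int numRegs) {
  std::sort(ivs.begin(), ivs.end(),
            [](const Interval &a, const Interval &b) { return a.start < b.start; });
  std::map<std::string, std::string> assignment;
  // Currently live intervals together with their register index.
  std::vector<std::pair<Interval, int>> active;
  std::set<int> freeRegs;
  for (int r = 0; r < numRegs; ++r) freeRegs.insert(r);

  for (const Interval &iv : ivs) {
    // Expire intervals that ended before this one starts.
    for (auto it = active.begin(); it != active.end();) {
      if (it->first.end < iv.start) {
        freeRegs.insert(it->second);
        it = active.erase(it);
      } else {
        ++it;
      }
    }
    if (freeRegs.empty()) {
      assignment[iv.name] = "spill"; // No register left: spill to the stack.
    } else {
      int r = *freeRegs.begin();
      freeRegs.erase(freeRegs.begin());
      active.push_back({iv, r});
      assignment[iv.name] = "r" + std::to_string(r);
    }
  }
  return assignment;
}
```

With two registers and intervals `a[0,4]`, `b[1,2]`, `c[3,5]`, the register used for `b` is recycled for `c` once `b`'s interval ends; with one register, `b` and `c` both spill.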
LLVM Backend Overview: Architecture, Code Generation
Architecture of LLVM Backend
- The LLVM backend consists of several parts, including the Target Description (defines the instruction set of the target machine), Instruction Selector (maps the IR to machine instructions), Register Allocator (manages the assignment of values to registers), and the Code Emitter (generates the final machine code).
LLVM Code Generation Process
- The LLVM backend process begins with SelectionDAG, a directed acyclic graph used to represent the computations required to produce a result. This representation is used to select instructions and perform certain optimizations.
- Then comes the Register Allocation phase where values are assigned to physical registers or stack locations.
- Finally, the instructions are scheduled and emitted in a format specific to the target machine.
Target-Specific Backends
- LLVM includes backends for a variety of architectures, including x86, ARM, MIPS, PowerPC, and more.
- It’s also possible to create a new LLVM backend for a custom or novel architecture.
Hour 5 Handouts: Code Optimization with LLVM
Understanding Optimization: Why and How
What is Optimization?
- Optimization refers to the process of modifying a system to make it work more efficiently. In the context of compilers, optimization involves transforming the program to improve its performance and/or reduce its resource usage without changing its behavior.
Why is Optimization Important?
- Optimization can lead to programs that execute faster, use less memory, or consume less power. This can be critical for applications with tight resource constraints or high performance requirements.
How is Optimization Done?
- Compiler optimizations happen at several levels: the frontend performs language-specific optimizations, LLVM performs machine-independent optimizations on the LLVM Intermediate Representation (IR) in the middle end, and the backend performs target-specific optimizations.
LLVM Optimization Passes: Introduction and Usage
What is an Optimization Pass?
- A pass in LLVM is a modular unit of transformation or analysis on the program. An optimization pass is a type of pass that transforms the program to improve its performance or resource usage.
Common LLVM Optimization Passes
- LLVM includes a large number of optimization passes. Here are a few examples:
- Instruction Combining Pass: Simplifies the IR by combining instructions.
- Dead Code Elimination: Removes code that does not affect the program’s output.
- Constant Propagation: Replaces variables known to be constant with their actual values.
- Function Inlining: Replaces calls to small functions with the body of the function, eliminating the overhead of the function call.
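To build intuition for what constant propagation and dead code elimination do, here is a toy sketch over an invented three-address representation (this does not use LLVM's APIs):

```cpp
#include <map>
#include <set>
#include <string>
#include <vector>

// A toy three-address instruction: dest = lhs op rhs.
// Operands are either constants (isConst) or names of earlier results.
struct Operand { bool isConst; int value; std::string name; };
struct Instr { std::string dest; char op; Operand lhs, rhs; };

// Constant propagation/folding: if both operands are (or have become)
// known constants, compute the instruction's result at compile time.
std::map<std::string, int> foldConstants(const std::vector<Instr> &prog) {
  std::map<std::string, int> known;
  for (const Instr &I : prog) {
    auto val = [&](const Operand &o, int &out) {
      if (o.isConst) { out = o.value; return true; }
      auto it = known.find(o.name);
      if (it == known.end()) return false;
      out = it->second;
      return true;
    };
    int a, b;
    if (val(I.lhs, a) && val(I.rhs, b)) {
      switch (I.op) {
      case '+': known[I.dest] = a + b; break;
      case '-': known[I.dest] = a - b; break;
      case '*': known[I.dest] = a * b; break;
      // '/' omitted: folding must never divide by zero.
      }
    }
  }
  return known;
}

// Dead code elimination: drop instructions whose result is never used
// and is not the program's final value (`root`). Assumes no side effects.
std::vector<Instr> eliminateDead(const std::vector<Instr> &prog,
                                 const std::string &root) {
  std::set<std::string> live{root};
  std::vector<Instr> out;
  // Walk backwards: an instruction is live only if its dest is needed.
  for (auto it = prog.rbegin(); it != prog.rend(); ++it) {
    if (!live.count(it->dest)) continue;
    if (!it->lhs.isConst) live.insert(it->lhs.name);
    if (!it->rhs.isConst) live.insert(it->rhs.name);
    out.insert(out.begin(), *it);
  }
  return out;
}
```

For `t1 = 2 + 3; t2 = t1 * 4; dead = 1 + 1` with final value `t2`, folding learns `t1 = 5` and `t2 = 20`, and DCE removes the unused `dead` instruction.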
Using Optimization Passes in LLVM
- LLVM provides the `opt` tool to apply optimization passes to LLVM IR. For example, `opt -S -O2 my_program.ll -o my_program_opt.ll` applies roughly the same pipeline as `-O2` in Clang (`-S` keeps the output in textual form; recent LLVM versions spell the pipeline explicitly as `-passes='default<O2>'`).
Pass Managers
- LLVM uses a framework called a pass manager to schedule and run the various passes that perform transformations and analyses on the code. This system helps ensure passes are run in an efficient order, while also handling dependencies between passes. There are three types of pass managers in LLVM: ModulePassManager, FunctionPassManager, and LoopPassManager. Each operates at a different level of granularity:
- ModulePassManager: A module pass operates on the whole LLVM module (which you can think of as a single compilation unit or the whole program). It can analyze and transform inter-procedural data (across multiple functions). For instance, an optimization pass that performs inter-procedural constant propagation would be a module pass.
- FunctionPassManager: A function pass operates on a single function within the module. Most LLVM optimization passes are function passes, as many transformations and analyses are most conveniently expressed at the function level. Examples include passes that simplify the control flow graph of a function, or passes that perform function-level constant propagation.
- LoopPassManager: A loop pass operates on a single loop within a function. Loop passes are useful for transformations and analyses that need to understand the structure and behavior of loops in the program. For example, an optimization pass that performs loop invariant code motion (moving computations that are constant within the loop outside of the loop) would be a loop pass.
Here’s a simple example of how you might set up and run a FunctionPassManager in C++:
```cpp
#include "llvm/IR/Function.h"
#include "llvm/IR/PassManager.h"
#include "llvm/Passes/PassBuilder.h"
#include "llvm/Transforms/Scalar/SimplifyCFG.h"

void optimizeFunction(llvm::Function *F) {
  llvm::FunctionPassManager FPM;
  // Add some passes to the pass manager.
  FPM.addPass(llvm::SimplifyCFGPass());

  // The new pass manager requires a FunctionAnalysisManager with the
  // standard analyses registered; PassBuilder sets these up.
  llvm::FunctionAnalysisManager FAM;
  llvm::PassBuilder PB;
  PB.registerFunctionAnalyses(FAM);

  // Run the pass manager on the function.
  FPM.run(*F, FAM);
}
```
In this example, we create a FunctionPassManager, add a SimplifyCFGPass to it, and then run the pass manager on a specific function. The SimplifyCFGPass is a pass that simplifies the control flow graph of a function by merging basic blocks, eliminating unnecessary branches, etc.
Hour 6 Handouts: Practical Session: Building a Simple Compiler with LLVM
Designing a Simple Language: Syntax, Semantics, Data Types
Simple Language Design
In this practical session, we'll build a compiler for a simple arithmetic language, which we'll call MiniCalc. MiniCalc supports the four basic arithmetic operations (`+`, `-`, `*`, `/`), parentheses for grouping, and integer literals. Here's a sample MiniCalc program:

```
(1 + 2) * (3 + 4)
```

This program computes the result of `3 * 7`, which is `21`.
Syntax, Semantics, Data Types
MiniCalc’s syntax is defined as follows:
- A program consists of an expression.
- An expression is either an integer literal, an expression followed by an operator and another expression, or an expression enclosed in parentheses.
MiniCalc’s semantics are straightforward:
- The operators perform their usual arithmetic operations.
- Parentheses can be used to change the order of operations.
MiniCalc has only one data type: integers.
Building the Compiler Frontend: Lexical Analysis, Parsing
Building the compiler for MiniCalc involves several steps:
- Lexical Analysis (Lexing): The lexer takes a string of characters (the source code) and breaks it up into a series of tokens, each of which represents a logical chunk of the program, such as a number, an operator, or a parenthesis.
- Parsing: The parser takes the stream of tokens produced by the lexer and builds an abstract syntax tree (AST), a data structure that represents the structure of the program. Each node in the AST represents a construct in the source code.
Here's a simple example of a hand-written lexer for MiniCalc using LLVM's `StringRef` utility (LLVM doesn't provide generic lexer or parser libraries; frontends typically write these by hand, as in LLVM's Kaleidoscope tutorial). The `Token` and `ExprAST` types are assumed to be defined elsewhere:
```cpp
#include <cctype>
#include <string>
#include <vector>
#include "llvm/ADT/StringRef.h"

// Lexer
std::vector<Token> lex(const std::string &input) {
  std::vector<Token> tokens;
  llvm::StringRef str(input);
  while (!str.empty()) {
    // Skip whitespace.
    if (std::isspace(str.front())) {
      str = str.drop_front();
      continue;
    }
    // Lex a number.
    if (std::isdigit(str.front())) {
      // Read until the first non-digit character.
      llvm::StringRef numStr =
          str.take_while([](char c) { return std::isdigit(c); });
      int num = std::stoi(numStr.str());
      tokens.push_back(Token::createNum(num));
      str = str.drop_front(numStr.size());
    }
    // Lex an operator or parenthesis.
    else {
      char c = str.front();
      tokens.push_back(Token::createOp(c));
      str = str.drop_front();
    }
  }
  return tokens;
}

// Parser
ExprAST *parse(std::vector<Token> &tokens) {
  // Implementation left as an exercise.
}
```
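The parsing step is left as an exercise above; as one possible shape, here is a self-contained recursive-descent sketch that parses and directly evaluates MiniCalc (a real compiler would build `ExprAST` nodes instead of evaluating, and error handling is omitted):

```cpp
#include <cctype>
#include <string>

// Recursive-descent evaluator for MiniCalc:
//   expr   := term (('+' | '-') term)*
//   term   := factor (('*' | '/') factor)*
//   factor := integer | '(' expr ')'
struct MiniCalc {
  std::string src;
  size_t pos = 0;

  // Skip whitespace and return the next character (or '\0' at the end).
  char peek() {
    while (pos < src.size() && std::isspace((unsigned char)src[pos])) ++pos;
    return pos < src.size() ? src[pos] : '\0';
  }

  int factor() {
    if (peek() == '(') {
      ++pos;           // consume '('
      int v = expr();
      ++pos;           // consume ')'
      return v;
    }
    int v = 0;         // parse a multi-digit integer literal
    while (pos < src.size() && std::isdigit((unsigned char)src[pos]))
      v = v * 10 + (src[pos++] - '0');
    return v;
  }

  int term() {
    int v = factor();
    for (char c = peek(); c == '*' || c == '/'; c = peek()) {
      ++pos;
      int r = factor();
      v = (c == '*') ? v * r : v / r;
    }
    return v;
  }

  int expr() {
    int v = term();
    for (char c = peek(); c == '+' || c == '-'; c = peek()) {
      ++pos;
      int r = term();
      v = (c == '+') ? v + r : v - r;
    }
    return v;
  }
};
```

Evaluating the sample program `(1 + 2) * (3 + 4)` with this sketch yields 21, as expected.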
In this course, we’ll dive deeper into these topics, building the lexer and parser step by step, and finally generating LLVM IR that represents the MiniCalc program.
Hour 7 Handouts: Continuation of Practical Session: Building a Simple Compiler with LLVM
Building the Compiler Backend: Semantic Analysis, Code Generation
Now that we have a lexer and a parser for our MiniCalc language, it’s time to move onto the backend part of our compiler. The backend will perform semantic analysis and code generation.
Semantic Analysis
Semantic analysis is the phase of a compiler where the abstract syntax tree (AST) is checked to ensure that the program has the correct semantics – that is, that it makes sense according to the rules of the language. Since our language MiniCalc is relatively simple, there’s not much semantic analysis to do – we don’t have any variables or functions, so we don’t need to worry about scope or type checking. However, for a more complex language, this step would be essential.
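Even so, a small semantic check is easy to imagine, e.g. rejecting division by a literal zero. Here is a hypothetical sketch over a minimal AST (the node shapes are invented for illustration, not the course's actual `ExprAST`):

```cpp
#include <memory>

// A minimal expression AST: either an integer literal or a binary op.
struct Expr {
  char op = 0;                     // 0 means "integer literal"
  int value = 0;                   // used when op == 0
  std::unique_ptr<Expr> lhs, rhs;  // used when op != 0
};

std::unique_ptr<Expr> lit(int v) {
  auto e = std::make_unique<Expr>();
  e->value = v;
  return e;
}

std::unique_ptr<Expr> bin(char op, std::unique_ptr<Expr> l,
                          std::unique_ptr<Expr> r) {
  auto e = std::make_unique<Expr>();
  e->op = op;
  e->lhs = std::move(l);
  e->rhs = std::move(r);
  return e;
}

// Semantic check: walk the tree and reject division by a literal zero.
bool checkNoDivByZero(const Expr &e) {
  if (e.op == 0) return true;  // literals are always fine
  if (e.op == '/' && e.rhs->op == 0 && e.rhs->value == 0)
    return false;
  return checkNoDivByZero(*e.lhs) && checkNoDivByZero(*e.rhs);
}
```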
Code Generation
In the code generation phase, the compiler transforms the AST into LLVM intermediate representation (IR). The LLVM IR can then be compiled into machine code for the target architecture. Here’s a basic example of what code generation might look like for MiniCalc:
```cpp
llvm::Value *NumExprAST::codegen() {
  // For a number, just return a 32-bit constant integer. TheContext is
  // the LLVMContext owned by the compiler (llvm::getGlobalContext()
  // was removed in modern LLVM).
  return llvm::ConstantInt::get(TheContext, llvm::APInt(32, num));
}

llvm::Value *BinaryExprAST::codegen() {
  // Codegen the left and right subexpressions.
  llvm::Value *L = lhs->codegen();
  llvm::Value *R = rhs->codegen();
  if (!L || !R)
    return nullptr;
  // Emit the binary operation; builder is an llvm::IRBuilder<>.
  switch (op) {
  case '+': return builder.CreateAdd(L, R);
  case '-': return builder.CreateSub(L, R);
  case '*': return builder.CreateMul(L, R);
  case '/': return builder.CreateSDiv(L, R);
  default:  return nullptr; // Unknown binary operator.
  }
}
```
In this example, we implement the `codegen` method for two types of AST nodes: `NumExprAST`, which represents a number, and `BinaryExprAST`, which represents a binary operation.
Testing and Debugging the Compiler
Testing and debugging are critical parts of developing a compiler. You’ll want to test your compiler with a variety of inputs to make sure it handles all the edge cases correctly.
For debugging, LLVM provides a number of tools that can help. For example, you can use the `llvm-dis` tool to print LLVM bitcode in a human-readable form, which can be very helpful for understanding what your compiler is doing. If your compiler is crashing or producing incorrect results, you can run it under a debugger such as LLDB.
LLVM provides several tools and techniques that can be used to debug issues in your compiler, or to understand the LLVM IR your compiler is generating. Here are a few examples:
- LLDB: This is the LLVM project’s debugger. If your compiler is crashing, you can use LLDB in a similar way to how you would use gdb to debug a C++ program. For example, you can run your compiler under LLDB, set breakpoints, step through the code, and inspect variables to understand what’s going wrong.
- LLVM IR Debugging: If your compiler is generating incorrect LLVM IR, there are a few techniques you can use to understand the problem. One is to simply print out the LLVM IR your compiler is generating, using LLVM's `dump()` methods or by writing the IR out to a file, and inspect the output manually to see if it matches what you expect.
- llvm-dis: This tool converts LLVM bitcode into human-readable LLVM assembly language. This can be useful for understanding what your compiler is generating, especially if you're working with the binary bitcode format.
- opt: This tool runs LLVM optimization passes on input IR. If an optimization pass is causing your program to produce incorrect results, you can use opt to run individual passes and see which one causes the problem.
- llvm-as and llvm-lit: These tools are used for assembling LLVM IR and running the LLVM test suite, respectively. You can use these tools to write tests for your compiler and ensure that it’s generating correct and efficient code.
- Debugging information: LLVM supports generating DWARF debugging information, which can be used by a debugger to step through the original source code of a program. If your compiler supports generating debug info, you can use this feature to debug the original source code of the programs you’re compiling.
Hour 8 Handouts: Advanced Topics – Customizing LLVM
Writing Your Own Optimization Passes
The LLVM framework allows developers to create their own optimization passes that can perform transformations on LLVM IR. These passes can be used to implement language-specific optimizations, perform analyses, and generally extend the capabilities of the LLVM compiler.
An LLVM pass is a C++ class that overrides certain methods defined by LLVM. In the legacy pass manager, the most important of these is the `runOnFunction` method, which is called for each function in the program. Here's a very simple example of what a legacy LLVM pass might look like:

```cpp
#include "llvm/Pass.h"

struct MyPass : public llvm::FunctionPass {
  static char ID;
  MyPass() : llvm::FunctionPass(ID) {}
  bool runOnFunction(llvm::Function &F) override {
    // This is where you'd put your code to analyze or transform F.
    return false; // Return true if the pass modified F.
  }
};
char MyPass::ID = 0;
```
In this course, we’ll look at how to define, implement, and use custom optimization passes in more detail.
Extending LLVM: Adding New IR Instructions, New Backends
LLVM is designed to be a highly flexible and extensible compiler framework, and it provides several mechanisms for extending its capabilities. Two of the most significant ways you can customize LLVM are by adding new IR instructions and by creating new backends.
Adding New IR Instructions
Sometimes, you might want to add new operations to LLVM IR to better support your source language or target architecture. Genuinely new instructions require invasive changes throughout LLVM, so in practice the supported extension point is intrinsics: functions with special, compiler-known semantics that appear as calls in the IR and are lowered by the backend to whatever the target needs.
Creating New Backends
If you’re targeting a machine architecture that LLVM doesn’t currently support, you can write a new backend to generate code for that architecture. A backend in LLVM is responsible for transforming LLVM IR into machine code. This is a complex task that involves understanding the details of the target architecture, including its instruction set, calling conventions, register allocation strategy, and more.
Hour 9 Handouts: LLVM Tools and Ecosystem
LLVM Tools
The LLVM Project provides a range of tools that can be used to develop, debug, and optimize LLVM-based compilers. Here are some of the key tools:
- lldb: This is the debugger from the LLVM project. It provides functionality similar to gdb and is built on LLVM and Clang libraries. It is primarily used to debug programs compiled with Clang and other LLVM-based compilers.
- llc: The LLVM static compiler. It takes LLVM bitcode or LLVM assembly language, and compiles it into assembly code for a specified architecture.
- lli: The LLVM interpreter. This can be used to execute LLVM bitcode directly, which can be useful for testing and debugging.
- opt: The LLVM optimizer. This tool takes LLVM bitcode, runs a series of optimization passes on it, and outputs optimized bitcode. It can be used to experiment with different optimization strategies and see their effect on the generated code.
- llvm-dis: This tool takes LLVM bitcode and translates it into human-readable LLVM assembly language.
These tools provide a wealth of functionality for working with LLVM, from debugging and testing to performance tuning and experimentation.
The LLVM Ecosystem: Related Projects, Community, Resources
Related Projects
LLVM is more than just a compiler framework. It’s the foundation for a range of related projects:
- Clang: This is a compiler for the C family of languages (C, C++, Objective-C, and Objective-C++) based on LLVM.
- LLD: This is the LLVM project’s linker. It aims to be faster and more flexible than traditional linkers.
- libc++ and libc++abi: These projects provide a standards-conformant and high-performance implementation of the C++ Standard Library, targeting C++11 and above.
Community
The LLVM community is a vibrant, global community of developers and users. There are regular LLVM Developers’ Meetings in both the United States and Europe, as well as smaller, local events around the world. The community communicates through a variety of mailing lists, IRC channels, and the LLVM Discourse forum.
Resources
There are a wealth of resources available for learning more about LLVM, from official documentation and tutorials to blog posts, presentations, and academic papers. Some good places to start are:
- The LLVM website (www.llvm.org)
- The LLVM documentation (llvm.org/docs/)
- The LLVM blog (blog.llvm.org)
- The LLVM YouTube channel (youtube.com/user/llvmorg)
Hour 10 Handouts: Case Studies and Best Practices
Real-world LLVM Use Cases: Projects Successfully Using LLVM
There are numerous projects in both industry and academia that successfully use LLVM. Some notable examples include:
- Clang: As mentioned before, Clang is a compiler front end for the C, C++, and Objective-C programming languages. It uses LLVM as its backend and has been noted for its exceptionally clear and expressive diagnostics.
- Swift: Swift is a general-purpose programming language developed by Apple for iOS, macOS, watchOS, and tvOS. It is open source and uses LLVM for code compilation.
- Rust: Rust is a systems programming language that runs blazingly fast, prevents segmentation faults, and guarantees thread safety. Rust also uses LLVM as a backend for code generation.
- Julia: Julia is a high-level, high-performance dynamic programming language for technical computing, with syntax that is familiar to users of other technical computing environments. Julia utilizes LLVM for JIT compilation.
Best Practices for Using LLVM: Coding Standards, Performance, Security
When using LLVM in your projects, consider the following best practices:
- Coding Standards: LLVM has its own coding standards that differ in some respects from other common C++ coding standards. Following these can make your code easier to understand and maintain for other LLVM developers.
- Performance: LLVM is designed to generate high-performance code, but like any tool, it can be used effectively or ineffectively. Understanding the cost model that LLVM uses for optimizations can help you write code that LLVM can optimize effectively.
- Security: Like all software, compilers can have security vulnerabilities. Be mindful of potential security issues such as integer overflows, buffer overflows, and undefined behavior in your LLVM code. LLVM includes sanitizers that can help find these issues in your code.
- Testing: Robust testing is essential for any compiler project. LLVM includes a comprehensive testing infrastructure, and adding thorough tests for any new features or changes you make can save a lot of trouble down the line.
- Community Interaction: The LLVM community is a resource. Interacting effectively with the community, through the mailing lists, bug tracker, code reviews, and developer meetings, can help you get help when you need it and can make your contributions more valuable to the project.