Browse by Tags

We've categorized the glossary terms. Use the filters to browse terms by tag.

Abstraction

In the context of computing, an abstraction is a representation that hides specifics from a consumer of services (a consumer being a computer program or human), making a system more generic and thus easily understood. A good example is your laptop’s operating system (OS). It abstracts away all the details of how your computer works. You don’t need to know anything about the CPU, memory, or how programs are handled; you just operate the OS, and the OS deals with the details...

Agile Software Development

A set of practices that emphasize iterative development cycles and self-organizing teams. In contrast to waterfall-like projects where value is generated only at the very end of a project, agile software development focuses on a continuous, incremental delivery of value and evolutionary improvement of the process itself. Problem it addresses Defining, communicating and understanding requirements for all stakeholders in a software project is very difficult, if not impossible. Yet, customers want their software projects to be delivered on time, in good quality, on budget and on scope...

API Gateway

An API gateway is a tool that aggregates unique application APIs, making them all available in one place. It allows organizations to move key functions, such as authentication and authorization or limiting the number of requests between applications, to a centrally managed location. An API gateway functions as a common interface to (often external) API consumers. Problem it addresses If you’re making APIs available to external consumers, you’ll want one entry point to manage and control all access...

Application Programming Interface (API)

An API is a way for computer programs to interact with each other. Just as humans interact with a website via a web page, an API allows computer programs to interact with each other. Unlike human interactions, APIs have limitations on what can and cannot be asked of them. The limitation on interaction helps to create stable and functional communication between programs. Problem it addresses As applications become more complex, small code changes can have drastic effects on other functionality...
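The contract idea described above can be sketched in a few lines of Python; the names and values here are purely illustrative, not any real API:

```python
# A minimal sketch of the API idea: a program exposes a small, fixed set
# of operations, and callers may interact only through them. The class
# and method names below are hypothetical, for illustration only.

class TemperatureAPI:
    """Public interface: only get_celsius and get_fahrenheit are offered."""

    def __init__(self):
        self._celsius = 21.0  # internal state, hidden from callers

    def get_celsius(self) -> float:
        return self._celsius

    def get_fahrenheit(self) -> float:
        return self._celsius * 9 / 5 + 32

api = TemperatureAPI()
print(api.get_fahrenheit())  # callers rely only on the published methods
```

Because callers can only use the published methods, the internals (here, `_celsius`) can change without breaking other programs, which is exactly the stability the definition describes.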

Autoscaling

Autoscaling is the ability of a system to scale automatically, typically, in terms of computing resources. With an autoscaling system, resources are automatically added when needed and can scale to meet fluctuating user demands. The autoscaling process varies and is configurable to scale based on different metrics, such as memory or process time. Managed cloud services are typically associated with autoscaling functionality as there are more options and implementations available than most on-premise deployments...
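One common scaling rule, similar in spirit to the one Kubernetes' Horizontal Pod Autoscaler uses, sizes the replica count so that average utilization approaches a target. A minimal sketch, with purely illustrative numbers:

```python
import math

# A sketch of a metric-driven autoscaling decision: scale the number of
# replicas so that average utilization approaches a target value.
# Thresholds and bounds here are illustrative, not from any real system.

def desired_replicas(current_replicas: int, current_util: float,
                     target_util: float, min_r: int = 1, max_r: int = 10) -> int:
    raw = current_replicas * (current_util / target_util)
    return max(min_r, min(max_r, math.ceil(raw)))

print(desired_replicas(3, 0.9, 0.6))  # load is high: scale out to 5
print(desired_replicas(5, 0.2, 0.6))  # load is low: scale in to 2
```

A real autoscaler would run this decision in a loop against live metrics and clamp how fast replicas may be added or removed.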

Bare Metal Machine

Bare metal refers to a physical computer, specifically a server, that has one and only one operating system. The distinction is important in modern computing because many, if not most, servers are virtual machines. A physical server is typically a fairly large computer with powerful hardware built-in. Installing an operating system and running applications directly on that physical hardware, without virtualization, is referred to as running on “bare metal.” Problem it addresses Pairing one operating system with one physical computer is the original pattern of computing...

Blue Green Deployment

Blue-green deployment is a strategy for updating running computer systems with minimal downtime. The operator maintains two environments, dubbed “blue” and “green”. One serves production traffic (the version all users are currently using), whilst the other is updated. Once testing has concluded on the non-active (green) environment, production traffic is switched over (often via the use of a load balancer). Note that blue-green deployment usually means switching the entire environments, comprising many services, all at once...
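The cut-over described above can be reduced to a single atomic switch. A minimal sketch, with hypothetical environment contents:

```python
# A sketch of the blue-green switch: both environments exist at once,
# and a single router flag decides which one receives all traffic.
# Environment contents are hypothetical.

environments = {
    "blue": "app v1.0 (currently live)",
    "green": "app v1.1 (staged, under test)",
}
active = "blue"

def handle_request() -> str:
    return environments[active]  # all traffic goes to the active side

print(handle_request())  # served by blue
active = "green"         # the cut-over is one atomic flip
print(handle_request())  # served by green; flip back to roll back
```

In practice the "flag" is usually a load balancer or DNS change, but the principle is the same: because both environments keep running, rolling back is just flipping the switch again.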

Canary Deployment

A canary deployment is a deployment strategy that starts with two environments: one with live traffic and the other containing the updated code without live traffic. The traffic is gradually moved from the original version of the application to the updated version. It can start by moving 1% of live traffic, then 10%, 25%, and so on, until all traffic is running through the updated version. Organizations can test the new version of the software in production, get feedback, diagnose errors, and quickly roll back to the stable version if necessary...
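The gradual traffic shift can be sketched with a deterministic router: hashing the user ID means each user consistently lands on the same version while the canary percentage grows. All names here are illustrative:

```python
import hashlib

# A sketch of canary routing: send a configurable percentage of users to
# the new version, based on a stable hash of the user id, so each user
# consistently sees the same version while the rollout progresses.

def route(user_id: str, canary_percent: int) -> str:
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return "v2-canary" if bucket < canary_percent else "v1-stable"

print(route("alice", 0))    # 0% canary: everyone stays on v1-stable
print(route("alice", 100))  # 100% canary: rollout complete, all on v2
```

Raising `canary_percent` in steps (1, 10, 25, ...) reproduces the gradual shift described above; setting it back to 0 is the rollback.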

Chaos Engineering

Chaos Engineering or CE is the discipline of experimenting on a distributed system in production to build confidence in the system’s capability to withstand turbulent and unexpected conditions. Problem it addresses SRE and DevOps practices focus on techniques to increase product resiliency and reliability. A system’s ability to tolerate failures while ensuring adequate service quality is typically a software development requirement. There are several aspects involved that could lead to outages of an application, like infrastructure, platform or other moving parts of a (microservice-based) application...

Client-Server Architecture

In a client-server architecture, the logic (or code) that makes up an application is split between two or more components: a client that asks for work to be done (e.g. the Gmail web application running in your web browser), and one or more servers that satisfy that request (e.g. the “send email” service running on Google’s computers in the cloud). In this example, outgoing emails that you write are sent by the client (web application running in your web browser) to a server (Gmail’s computers, which forward your outgoing emails to their recipients)...

Cloud Computing

Cloud computing offers CPU power, storage, and network capabilities, enabling scalable and flexible access to resources across global data centers. It spans private clouds, dedicated to single organizations for security and control, and public clouds, open for widespread use, optimizing cost and scalability. Problem it addresses Traditionally, organizations needing more computing capacity had to choose between costly investments in new server facilities or upgrades to existing infrastructure, a slow and resource-heavy process...

Cloud Native Apps

Cloud native applications are specifically designed to take advantage of innovations in cloud computing. These applications integrate easily with their respective cloud architectures, taking advantage of the cloud’s resources and scaling capabilities. It also refers to applications that take advantage of innovations in infrastructure driven by cloud computing. Cloud native applications today include apps that run in a cloud provider’s datacenter and on cloud native platforms on-premise. Problem it addresses Traditionally, on-premise environments provided compute resources in a fairly bespoke way...

Cloud Native Glossary

The Cloud Native Glossary aims to make the cloud native space — which is notorious for its complexity — simpler for people by making it easier to understand, not only for technologists but also for people on the business side. To achieve that, we focus on simplicity (e.g., simple language free from buzzwords, examples anyone using technology can relate to, leaving unnecessary details out). The Glossary is a project led by the CNCF Business Value Subcommittee (BVS)...

Cloud Native Security

Cloud native security is an approach that builds security into cloud native applications. It ensures that security is part of the entire application lifecycle from development to production. Cloud native security seeks to ensure the same standards as traditional security models while adapting to the particulars of cloud native environments, namely rapid code changes and highly ephemeral infrastructure. Cloud native security is highly related to the practice called DevSecOps. Problem it addresses Traditional security models were built with a number of assumptions that are no longer valid...

Cloud Native Technology

Cloud native technologies, also referred to as the cloud native stack, are the technologies used to build cloud native applications. These technologies enable organizations to build and run scalable applications in modern and dynamic environments such as public, private, and hybrid clouds, while leveraging cloud computing benefits to their fullest. They are designed from the ground up to exploit the capabilities of cloud computing; containers, service meshes, microservices, and immutable infrastructure exemplify this approach...

Cluster

A cluster is a group of computers or applications that work together towards a common goal. In the context of cloud native computing, the term is most often applied to Kubernetes. A Kubernetes cluster is a set of services (or workloads) that run in their own containers, usually on different machines. The collection of all these containerized services, connected over a network, represents a cluster. Problem it addresses Software that runs on a single computer presents a single point of failure — if that computer crashes, or someone accidentally unplugs the power cable, then some business-critical system may be taken offline...

Container Orchestration

Container orchestration refers to managing and automating the lifecycle of containerized applications in dynamic environments. It’s executed through a container orchestrator (in most cases, Kubernetes), which enables deployments, (auto)scaling, auto-healing, and monitoring. Orchestration is a metaphor: The orchestration tool conducts containers like a music conductor, ensuring every container (or musician) does what it should. Problem it addresses Managing microservices, security, and network communication at scale — and distributed systems in general — is hard, if not impossible, to manage manually...

Containerization

Containerization is the process of packaging application code, along with the libraries and dependencies required to run it, into a single lightweight executable called a container image. Problem it addresses Before containers became prevalent, organizations relied on virtual machines (VMs) to orchestrate multiple applications on a single bare-metal machine. VMs are significantly larger than containers and require a hypervisor to run. Due to the storage, backup, and transfer of these larger VM templates, creating the VM templates is also slow...

Containers

A container is a running process with resource and capability constraints managed by a computer’s operating system. The files available to the container process are packaged as a container image. Containers run adjacent to each other on the same machine, but typically the operating system prevents the separate container processes from interacting with each other. Problem it addresses Before containers were available, separate machines were necessary to run applications. Each machine would require its own operating system, which takes CPU, memory, and disk space, all for an individual application to function...

Continuous Delivery (CD)

Continuous delivery, often abbreviated as CD, is a set of practices in which code changes are automatically deployed into an acceptance environment (or, in the case of continuous deployment, into production). CD crucially includes procedures to ensure that software is adequately tested before deployment and provides a way to roll back changes if deemed necessary. Continuous integration (CI) is the first step towards continuous delivery (i.e., changes have to merge cleanly before being tested and deployed)...

Continuous Deployment (CD)

Continuous deployment, often abbreviated as CD, goes a step further than continuous delivery by deploying finished software directly to production. Continuous deployment (CD) goes hand in hand with continuous integration (CI), and is often referred to as CI/CD. The CI process tests if the changes to a given application are valid, and the CD process automatically deploys the code changes through an organization’s environments from test to production. Problem it addresses Releasing new software versions can be a labor-intensive and error-prone process...

Continuous integration (CI)

Continuous integration, often abbreviated as CI, is the practice of integrating code changes as regularly as possible. CI is a prerequisite for continuous delivery (CD). Traditionally, the CI process begins when code changes are committed to a source control system (Git, Mercurial, or Subversion) and ends with a tested artifact ready to be consumed by a CD system. Problem it addresses Software systems are often large and complex, with numerous developers maintaining and updating them...

Contributor Ladder

Hi there! 👋 Thanks for your interest in contributing to the CNCF Cloud Native Glossary project. Whether you contribute new terms, help localize the Glossary into your native language, or want to help others get started, there are many ways to become an active member of this community. This doc outlines the different contributor roles within the project and the responsibilities and privileges that come with them. 1. Contributors: The Glossary is for everyone...

Datacenter

A datacenter is a specialized building or facility designed to house computers, most often servers. These datacenters tend to be connected to high-speed internet lines, especially when focused on cloud computing. The buildings housing datacenters are equipped to maintain service even during adverse events, including generators that provide power during outages and powerful air conditioning that keeps the heat-producing computers cool. Problem it addresses Before datacenters became prevalent in the late 1990s, there were mainly individual computers with specific tasks or those used by individuals to do their work...

DevOps

DevOps is a methodology in which teams own the entire process from application development to production operations, hence DevOps. It goes beyond implementing a set of technologies and requires a complete shift in culture and processes. DevOps calls for groups of engineers that work on small components (versus an entire feature), decreasing handoffs – a common source of errors. Problem it addresses Traditionally, in complex organizations with tightly-coupled monolithic apps, work was generally fragmented between multiple groups...

DevSecOps

The term DevSecOps refers to a cultural merger of the development, operational, and security responsibilities. It extends the DevOps approach to include security priorities with minimal to no disruption in the developer and operational workflow. Like DevOps, DevSecOps is a cultural shift, pushed by the technologies adopted, with unique adoption methods. Problem it addresses DevOps practices include continuous integration, continuous delivery, and continuous deployment and accelerate application development and release cycles...

Distributed Apps

A distributed application is an application where the functionality is broken down into multiple smaller independent parts. Distributed applications are usually composed of individual microservices that handle different concerns within the broader application. In a cloud native environment, the individual components typically run as containers on a cluster. Problem it addresses An application running on one single computer represents a single point of failure — if that computer fails, the application becomes unavailable...

Distributed System

A distributed system is a collection of autonomous computing elements connected over a network that appears to users as a single coherent system. Generally referred to as nodes, these components can be hardware devices (e.g. computers, mobile phones) or software processes. Nodes are programmed to achieve a common goal and, to collaborate, they exchange messages over the network. Problem it addresses Numerous modern applications today are so big they’d need supercomputers to operate...

eBPF

eBPF, or extended Berkeley Packet Filter, is a technology that allows small, sandboxed programs or scripts to run in the kernel space of a Linux system without having to change the kernel’s source code or load Linux kernel modules. A Linux system has two spaces: the kernel and the user space. The kernel represents the operating system’s core and is the only part with unlimited access to the hardware. Applications reside in the user space, and when they need higher permissions, they send a request to the kernel...

Edge Computing

Edge computing is a distributed system approach that shifts some storage and computing capacity from the primary data center to the data source. The gathered data is computed locally (e.g., on a factory floor, in a store, or throughout a city) rather than sent to a centralized data center for processing and analysis. These local processing units or devices represent the system’s edge, whereas the data center is its center. The output computed at the edge is then sent back to the primary data center for further processing...

Event Streaming

Event streaming is an approach where software sends event data from one application to another to continuously communicate what they are doing. Picture a service broadcasting everything it does to all other services. Each activity taken by a service is referred to as an event, hence event streaming. For example, NASDAQ gets updates on stock and commodities pricing every second. If you had an application that monitored a specific set of stocks, you would want to receive that information in near real-time...

Event-Driven Architecture

Event-driven architecture is a software architecture that promotes the creation, processing, and consumption of events. An event is any change to an application’s state. For example, hailing a ride on a ride-sharing app represents an event. This architecture creates the structure in which events can be properly routed from their source (the app requesting a ride) to the desired receivers (the apps of available drivers nearby). Problem it addresses As more data becomes real-time, finding reliable ways to ensure that events are captured and routed to the appropriate service that must process event requests gets increasingly challenging...
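The routing from source to receivers can be sketched with a tiny in-process event bus; the `ride_requested` event mirrors the ride-sharing example above, and all names are illustrative:

```python
from collections import defaultdict
from typing import Callable

# A minimal in-process sketch of event-driven architecture: producers
# publish events to a bus, and the bus routes each event to whoever
# subscribed to that event type. Names are hypothetical.

subscribers = defaultdict(list)

def subscribe(event_type: str, handler: Callable) -> None:
    subscribers[event_type].append(handler)

def publish(event_type: str, payload: dict) -> None:
    for handler in subscribers[event_type]:
        handler(payload)

notified = []
subscribe("ride_requested", lambda e: notified.append(f"driver sees {e['rider']}"))
publish("ride_requested", {"rider": "alice"})
print(notified)  # the event reached the subscribed driver app
```

A production system would replace the in-memory bus with a broker (e.g., a message queue), but the producer/consumer decoupling is the same: the rider's app never needs to know which driver apps exist.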

Function as a Service (FaaS)

Function as a Service (FaaS) is a cloud computing model that provides a platform for executing event-triggered functions, allowing for automatic scaling without manual intervention. At its essence, FaaS enables the deployment of individual functions that are activated by specific events, operate on a short-term basis, and then shut down, ensuring resources are not wasted. This model supports an autoscaling feature, enabling a function instance to be initiated per request and terminated post-execution, emphasizing its stateless nature...

Horizontal Scaling

Horizontal scaling is a technique where a system’s capacity is increased by adding more nodes versus adding more compute resources to individual nodes (the latter being known as vertical scaling). Let’s say we have a system with 4 GB of RAM and want to increase its capacity to 16 GB of RAM. Scaling it horizontally means doing so by adding 3 more nodes of 4 GB RAM each rather than switching to a single 16 GB RAM system. This approach enhances the performance of an application by adding new instances, or nodes, to better distribute the workload...
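The 4 GB example above works out like this in a few lines of Python (the cluster model is purely illustrative):

```python
# A sketch of the example above: reach 16 GB of total memory by adding
# nodes (horizontal scaling) instead of replacing the node with a bigger
# one (vertical scaling). The cluster is modeled as a list of node sizes.

def scale_horizontally(nodes: list, extra_nodes: int, node_ram_gb: int) -> list:
    return nodes + [node_ram_gb] * extra_nodes

cluster = [4]                                # one 4 GB node
cluster = scale_horizontally(cluster, 3, 4)  # add 3 more 4 GB nodes
print(sum(cluster), "GB across", len(cluster), "nodes")
```

The total capacity is the same 16 GB either way; the horizontal version also removes the single point of failure, since losing one node leaves three others running.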

How To Contribute

Welcome Welcome to the Cloud Native Glossary contributing guide, and thank you for your interest. There are a number of ways you can contribute to this project, which we’ll cover in detail here: Work on an existing issue Propose new terms Update existing ones Localize the glossary CNCF glossary overview The goal of this glossary is to simplify the cloud native space — which is notorious for its complexity — and thus make it more accessible to people...

Idempotence

In mathematics and computer science, idempotence describes an operation that always leads to the same outcome, no matter how many times you execute it. If the parameters are the same, executing an idempotent operation several times will have no additional effect...
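The distinction is easiest to see in code. Setting a value is idempotent; incrementing it is not:

```python
# A small sketch of idempotence: repeating set_count has no additional
# effect, while each repetition of increment changes the outcome.

state = {"count": 0}

def set_count(value: int) -> None:  # idempotent
    state["count"] = value

def increment() -> None:            # not idempotent
    state["count"] += 1

for _ in range(3):
    set_count(5)
print(state["count"])  # 5: three calls, same outcome as one

increment()
increment()
print(state["count"])  # 7: each call changed the result
```

This property matters in distributed systems: if a request may be retried after a timeout, an idempotent operation can safely be sent again.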

Immutable Infrastructure

Immutable Infrastructure refers to computer infrastructure (virtual machines, containers, network appliances) that cannot be changed once deployed. This can be enforced by an automated process that overwrites unauthorized changes or through a system that won’t allow changes in the first place. Containers are a good example of immutable infrastructure because persistent changes to containers can only be made by creating a new version of the container or recreating the existing container from its image...

Infrastructure as a Service (IaaS)

Infrastructure as a service, or IaaS, is a cloud computing service model that offers physical or virtualized compute, storage, and network resources on-demand on a pay-as-you-go model. Cloud providers own and operate the hardware and software, available to consumers in public, private, or hybrid cloud deployments. Problem it addresses In traditional on-premise setups, organizations often struggle with effective computing resource usage. Data centers have to be built for potential peak demand, even if it’s only needed 1% of the time...

Infrastructure as Code (IaC)

Infrastructure as code is the practice of storing the definition of infrastructure as one or more files. This replaces the traditional model where infrastructure as a service is provisioned manually, usually through shell scripts or other configuration tools. Problem it addresses Building applications in a cloud native way requires infrastructure to be disposable and reproducible. It also needs to scale on-demand in an automated and repeatable way, potentially without human intervention...

Ingress

An Ingress is a set of rules that helps to manage internet traffic from outside into a container or a group of containers running in a cluster. It consists of two elements: the ingress resource and the ingress controller. The ingress resource is a configuration file that lives along with other manifest files and allows admins to configure the external traffic routing. The ingress controller is the web server technology that actually performs the routing of the traffic according to the configuration in the ingress resource...
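The rule-matching an ingress controller performs can be sketched as a first-match lookup from external host and path to an internal service; the hosts and service names below are hypothetical:

```python
from typing import Optional

# A minimal sketch of ingress-style routing: ordered rules map an
# external host/path to an internal service, first match wins.
# Hostnames and service names are placeholders.

rules = [
    {"host": "shop.example.com", "path": "/cart", "service": "cart-svc"},
    {"host": "shop.example.com", "path": "/",     "service": "web-svc"},
]

def route(host: str, path: str) -> Optional[str]:
    for rule in rules:
        if host == rule["host"] and path.startswith(rule["path"]):
            return rule["service"]
    return None  # no rule matched this request

print(route("shop.example.com", "/cart/items"))  # cart-svc
print(route("shop.example.com", "/home"))        # web-svc
```

In Kubernetes, the `rules` list corresponds to the ingress resource (configuration), and the `route` function corresponds to what the ingress controller (the web server) does for every incoming request.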

Kubernetes

Kubernetes, often abbreviated as K8s, is an open source container orchestrator. It automates the lifecycle of containerized applications on modern infrastructures, functioning as a “datacenter operating system” that manages applications across a distributed system. Kubernetes schedules containers across nodes in a cluster, bundling several infrastructure resources such as load balancer, persistent storage, etc. to run containerized applications. Kubernetes enables automation and extensibility, allowing users to deploy applications declaratively (see below) in a reproducible way...

Loosely Coupled Architecture

Loosely coupled architecture is an architectural style where the individual components of an application are built independently from one another (the opposite paradigm of tightly coupled architectures). Each component, sometimes referred to as a microservice, is built to perform a specific function in a way that can be used by any number of other services. This pattern is generally slower to implement than tightly coupled architecture but has a number of benefits, particularly as applications scale...

Microservices Architecture

A microservices architecture is an architectural approach that breaks applications into individual independent (micro)services, with each service focused on a specific functionality. These services work together closely, appearing to the end user as a single entity. Take Netflix as an example. Its interface allows you to access, search, and preview videos. These capabilities are likely powered by smaller services that each handle one functionality, e.g., authentication, search, and running previews in your browser...

Monolithic Apps

A monolithic application contains all functionality in a single deployable program. This is often the simplest and easiest place to start when making an application. However, once the application grows in complexity, monoliths can become hard to maintain. With more developers working on the same codebase, the likelihood of conflicting changes and the need for interpersonal communication between developers increases. Problem it addresses Decomposing an application into microservices increases its operational overhead — there are more things to test, deploy, and keep running...

Multitenancy

Multitenancy (or multi-tenancy) refers to a single software installation that serves multiple tenants. A tenant is a user, application, or a group of users/applications that utilize the software to operate on their own data set. These tenants don’t share data (unless explicitly instructed by the owner) and may not even be aware of one another. A tenant can be as small as one independent user with a single login ID — think personal productivity software — or as large as an entire corporation with thousands of login IDs, each with its own privileges yet interrelated in multiple ways...

Mutual Transport Layer Security (mTLS)

Mutual TLS (mTLS) is a technique used to authenticate and encrypt messages sent between two services. Mutual TLS is the standard Transport Layer Security (TLS) protocol but, instead of validating the identity of just one connection, both sides are validated. Problem it addresses Microservices communicate over a network and, just like your wifi network, communication in transit over that network can be hacked. mTLS ensures that no unauthorized party can listen in on or impersonate legitimate requests...
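In Python's standard `ssl` module, the difference between TLS and mutual TLS on the server side is essentially one setting: requiring a certificate from the client. A minimal sketch, with placeholder certificate paths:

```python
import ssl

# A sketch of the server side of mutual TLS: unlike plain TLS, the
# server also demands and verifies a certificate from the client.
# The certificate file paths below are placeholders, so those lines
# are left commented out.

ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
ctx.verify_mode = ssl.CERT_REQUIRED  # this line makes the TLS mutual

# ctx.load_cert_chain("server.crt", "server.key")  # server's own identity
# ctx.load_verify_locations("clients-ca.crt")      # CA used to check clients

print(ctx.verify_mode == ssl.CERT_REQUIRED)
```

With `CERT_REQUIRED` set, the handshake fails unless the client presents a certificate signed by a trusted CA, which is how mTLS blocks unauthorized parties from impersonating legitimate services. In service meshes, this setup is typically automated for every service-to-service connection.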

Nodes

A node is a computer that works in concert with other computers, or nodes, to accomplish a common task. Take your laptop, modem, and printer, for example. They are all connected over your wifi network, communicating and collaborating, each representing one node. In cloud computing, a node can be a physical computer, a virtual computer, referred to as a VM, or even a container. Problem it addresses While an application could (and many do) run on one single machine, there are some risks involved with that...

Observability

Observability is a system property that defines the degree to which the system can generate actionable insights. It allows users to understand a system’s state from its external outputs and take (corrective) action. Computer systems are measured by observing low-level signals such as CPU time, memory, and disk space, as well as higher-level and business signals, including API response times, errors, transactions per second, etc. These observable systems are observed (or monitored) through specialized tools, so-called observability tools...

Pod

Within a Kubernetes environment, a pod acts as the most basic deployable unit. It represents an essential building block for deploying and managing containerized applications. Each pod contains a single application instance and can hold one or more containers. Kubernetes manages pods as part of a larger deployment and can scale pods vertically or horizontally as needed. Problem it addresses While containers generally act as independent units that run and control a particular workload, there are cases when containers need to interact and be controlled in a tightly coupled manner...

Policy as Code (PaC)

Policy as Code is the practice of storing the definition of policies as one or more files in machine-readable and processable form. This replaces the traditional model where policies are documented in human-readable form in separate documents. Problem it addresses Building applications and infrastructures are often constrained by many policies that an organization defines, e.g. security policies that forbid storing secrets in source code, running a container with superuser permissions, or storing some data outside a specific geo region...
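A machine-readable policy is just a check a pipeline can run. A minimal sketch of the "no superuser containers" policy mentioned above, evaluated against a deployment manifest modeled as a dict (the manifest shape and names are illustrative):

```python
# A minimal sketch of policy as code: the "no superuser containers"
# policy expressed as a function a CI pipeline could run against a
# deployment manifest. The manifest structure here is hypothetical.

def violations(manifest: dict) -> list:
    found = []
    for c in manifest.get("containers", []):
        if c.get("runAsUser") == 0:  # UID 0 is the superuser
            found.append(f"{c['name']}: containers must not run as root")
    return found

manifest = {"containers": [{"name": "web", "runAsUser": 0},
                           {"name": "db", "runAsUser": 1000}]}
print(violations(manifest))  # flags only the root container
```

Because the policy is code, it can be version-controlled, reviewed, tested, and enforced automatically on every change, instead of living in a document someone must remember to consult.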

Portability

A software characteristic, portability is a form of reusability that helps to avoid “lock-in” to certain operating environments, e.g. cloud providers, operating systems or vendors. Traditionally, software is often built for specific environments (e.g. AWS or Linux). Portable software, on the other hand, works in different operating environments without needing major rework. An application is considered portable if the effort required to adapt it to a new environment is within reasonable limits...

Reliability

From a cloud native perspective, reliability refers to how well a system responds to failures. If we have a distributed system that keeps working as infrastructure changes and individual components fail, it is reliable. On the other hand, if it fails easily and operators need to intervene manually to keep it running, it is unreliable. The goal of cloud native applications is to build inherently reliable systems...

Role-Based Access Control (RBAC)

Role-based access control (RBAC) is a security method of managing user access to systems, networks, or resources based on their role within a team or a larger organization. RBAC empowers IT administrators to identify the necessary level of access for all users with a particular job function and assign those users a role with a predefined set of permissions. Organizations utilize RBAC to provide their employees with varying levels of access tailored to their roles and responsibilities...
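The core RBAC indirection is that permissions attach to roles, and users are granted roles rather than individual permissions. A minimal sketch with illustrative role and user names:

```python
# A minimal sketch of RBAC: permissions hang off roles, users are
# assigned roles, and an access check walks user -> role -> permission.
# All role names, users, and actions are illustrative.

roles = {
    "viewer": {"read"},
    "editor": {"read", "write"},
    "admin":  {"read", "write", "delete"},
}
user_roles = {"dana": "viewer", "sam": "editor"}

def allowed(user: str, action: str) -> bool:
    return action in roles.get(user_roles.get(user, ""), set())

print(allowed("sam", "write"))   # True: editors can write
print(allowed("dana", "write"))  # False: viewers cannot
```

Changing what an entire job function may do then means editing one role definition rather than touching every user account.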

Runtime

A runtime, in general, executes a piece of software. It is an abstraction of the underlying operating system that translates the program’s commands into respective actions for the operating system. In the context of cloud native, runtime generally refers to container runtime. A container runtime specifically implements the Open Container Initiative specification to ensure consistent handling around different container orchestration technologies. Problem it addresses Without the abstraction of a container runtime, the application would have to deal with all the mechanics of each operating system, increasing the complexity of running the app...

Scalability

Scalability refers to how well a system can grow; that is, its ability to increase its capacity to do whatever the system is supposed to do. For example, a Kubernetes cluster scales by increasing or reducing the number of containerized apps, but that scalability depends on several factors: how many nodes it has, how many containers each node can handle, and how many records and operations the control plane can support. A scalable system makes it easy to add more capacity...

Security Chaos Engineering

Security Chaos Engineering or SCE is a discipline based on Chaos Engineering. SCE performs proactive security experimentation on a distributed system to build confidence in the system’s capability to withstand turbulent and malicious conditions. Security chaos engineers use scientific method loops to achieve this, including steady-state, hypothesis, continuous verification, lesson learned, and mitigation implementation. Problem it addresses The main priority for site reliability engineers (SREs) and cyber security engineers is to restore service as fast as possible with the goal of achieving zero downtime and minimizing business impact...

Self Healing

A self-healing system is capable of recovering from certain types of failure without any human intervention. It has a “convergence” or “control” loop that actively looks at the system’s actual state and compares it to the state that the operators initially desired. If there is a difference (e.g., fewer application instances are running than desired), it will take corrective action (e.g., start new instances)...
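The convergence loop described above can be sketched in a few lines: compare desired and actual state, then take corrective action until they match. The instance names are illustrative:

```python
# A sketch of a self-healing control loop: compare the desired state
# (three instances) to the actual state and correct any difference,
# with no human involved. Instance names are placeholders.

desired_instances = 3
running = ["app-1"]  # two instances have crashed

def reconcile() -> None:
    while len(running) < desired_instances:        # too few: start one
        running.append(f"app-{len(running) + 1}")
    del running[desired_instances:]                # too many: stop extras

reconcile()
print(running)  # back to the desired three instances
```

Real orchestrators run this kind of loop continuously, so the system converges back to the desired state whatever failure occurs.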

Serverless

Serverless computing abstracts servers away from the user. Operational management falls to the service provider, including handling physical machines and VM provisioning. Service providers can be public cloud entities or internal IT departments serving their development teams. These providers offer user interfaces such as SDKs, CLIs, or OCI-compliant runtimes, focusing on code and deployment tasks. Charges are based on a pay-per-use model. Scaling and resource provisioning for computing, storage, or networking are automatically adjusted based on application demand without user intervention...

Service

Please note that in IT, service has multiple meanings. In this definition, we’ll focus on the more traditional one: service as in microservice. How or even if services differ from microservices is nuanced and different people may have different opinions. For a high-level definition, we’ll treat them as the same. Please refer to the microservices definition...

Service Discovery

Service discovery is the process of finding individual instances that make up a service. A service discovery tool keeps track of the various nodes or endpoints that make up a service.

Problem it addresses

Cloud native architectures are dynamic and fluid, meaning they are constantly changing. A containerized app will likely end up starting and stopping multiple times in its lifetime. Each time that happens, it will have a new address and any app that wants to find it needs a tool to provide the new location information...
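The core of a service discovery tool is a registry mapping service names to the addresses of their live instances. The sketch below is a minimal in-memory version, assuming a simple register/deregister/lookup interface; real tools add health checks, TTLs, and distributed storage.

```python
class ServiceRegistry:
    """Minimal in-memory service registry: instances register their
    current address on start and deregister on stop, and clients look
    up live endpoints by service name instead of hard-coding addresses."""

    def __init__(self):
        self._endpoints: dict[str, set[str]] = {}

    def register(self, service: str, address: str) -> None:
        self._endpoints.setdefault(service, set()).add(address)

    def deregister(self, service: str, address: str) -> None:
        self._endpoints.get(service, set()).discard(address)

    def lookup(self, service: str) -> list[str]:
        return sorted(self._endpoints.get(service, set()))

registry = ServiceRegistry()
registry.register("cart", "10.0.0.5:8080")
registry.register("cart", "10.0.0.9:8080")
# A container restarts and comes back at a new address:
registry.deregister("cart", "10.0.0.5:8080")
registry.register("cart", "10.0.1.3:8080")
print(registry.lookup("cart"))  # ['10.0.0.9:8080', '10.0.1.3:8080']
```

The key property is that clients only ever ask for "cart"; the constantly changing addresses stay an implementation detail of the registry.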

Service Mesh

In a microservices world, apps are broken down into multiple smaller services that communicate over a network. Just like your Wi-Fi network, computer networks are intrinsically unreliable, hackable, and often slow. Service meshes address this new set of challenges by managing traffic (i.e., communication) between services and adding reliability, observability, and security features uniformly across all services.

Problem it addresses

Having moved to a microservices architecture, engineers are now dealing with hundreds, possibly even thousands of individual services, all needing to communicate...

Service Proxy

A service proxy intercepts traffic to or from a given service, applies some logic to it, then forwards that traffic to another service. It essentially acts as a “go-between” that collects information about network traffic and/or applies rules to it.

Problem it addresses

To keep track of service-to-service communication (aka network traffic) and potentially transform or redirect it, we need to collect data. Traditionally, the code enabling data collection and network traffic management was embedded within each application...
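The "go-between" pattern can be illustrated with a wrapper around a request handler: the proxy records simple metrics, then forwards the request to the real service. This is an illustrative sketch only; real service proxies (e.g. Envoy) sit at the network level rather than wrapping functions.

```python
import time

def make_proxy(upstream):
    """Wrap an upstream request handler with a proxy that collects
    simple traffic metrics before forwarding, keeping the observability
    logic out of the application itself."""
    metrics = {"requests": 0, "total_seconds": 0.0}

    def proxy(request: str) -> str:
        metrics["requests"] += 1
        start = time.perf_counter()
        response = upstream(request)  # forward to the real service
        metrics["total_seconds"] += time.perf_counter() - start
        return response

    return proxy, metrics

# A hypothetical service, unchanged and unaware of the proxy:
echo_service = lambda req: f"echo: {req}"
proxied, metrics = make_proxy(echo_service)
print(proxied("hello"))     # echo: hello
print(metrics["requests"])  # 1
```

The application code (`echo_service`) needs no instrumentation of its own, which is exactly the benefit the definition describes: data collection moves out of the app and into the proxy.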

Shift Left

The “left” in Shift Left refers to earlier stages in the software development lifecycle, thinking of the lifecycle as a line where stages are executed from left to right. Shift Left is the practice of implementing tests, security, or other development practices early in the software development lifecycle rather than towards the end. Although originally used to refer to the practice of testing early, Shift Left can now also be applied to other aspects of software development and DevOps, such as security and deployment...

Site Reliability Engineering

Site Reliability Engineering (SRE) is a discipline that combines operations and software engineering, applying the latter to infrastructure and operations problems specifically. That is, instead of building product features, Site Reliability Engineers build systems that run applications. There are similarities with DevOps, but while DevOps focuses on getting code to production, SRE ensures that code running in production works properly.

Problem it addresses

Ensuring applications run reliably requires multiple capabilities, from performance monitoring and alerting to debugging and troubleshooting...

Stateful Apps

When we speak of stateful (and stateless) apps, state refers to any data the app needs to store to function as designed. An online shop that remembers your cart, for example, is a stateful app. Today, most applications we use are at least partly stateful. In cloud native environments, though, stateful apps are a challenge. This is because cloud native apps are very dynamic: they can be scaled up and down, restarted, and moved around, yet still need to be able to access their state...

Stateless Apps

Stateless applications handle each request independently, without remembering any previous interactions or user data. Data from previous interactions is referred to as state; since that data isn’t stored anywhere, these apps are called stateless. Here’s an example: when you use a search engine and that search is interrupted (e.g., the window is closed), those search results are lost and you’ll need to start all over. Applications that process requests while considering previous interactions, on the other hand, are called stateful applications...
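The distinction is easy to see in code. The handler below is stateless: its output depends only on the inputs of the current request, so nothing needs to survive a restart and any instance can serve any request. The function and data are hypothetical, chosen only to illustrate the property.

```python
def stateless_search(query: str, catalog: list[str]) -> list[str]:
    """A stateless request handler: the result depends only on this
    request's inputs; no data is remembered between calls."""
    return [item for item in catalog if query.lower() in item.lower()]

catalog = ["Cloud Native Patterns", "Kubernetes in Action", "Site Reliability Engineering"]
print(stateless_search("cloud", catalog))  # ['Cloud Native Patterns']
print(stateless_search("cloud", catalog))  # same inputs, same result, every time
```

A stateful variant would, for instance, also record the user's past queries to rank results, and would then need that history stored somewhere that survives restarts.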

Style Guide

This style guide will help you understand the Glossary audience, definition structure, required level of detail, and how to keep a consistent style. The Cloud Native Glossary follows the default style guide of the CNCF repository. Additionally, it follows these rules:

- Use simple, accessible language, avoiding technical jargon and buzzwords
- Avoid colloquial language
- Use literal and concrete language
- Omit contractions
- Use passive voice sparingly
- Aim to phrase statements in a positive form
- No exclamation marks outside of quotations
- Do not exaggerate
- Avoid repetition
- Be concise

Audience

The Glossary is written for technical and non-technical audiences...

Tightly Coupled Architecture

Tightly coupled architecture is an architectural style in which a number of application components are interdependent (the opposite paradigm of loosely coupled architectures). This means that a change in one component will likely impact other components. Tightly coupled architectures are generally easier to implement than more loosely coupled styles, but they can leave a system more vulnerable to cascading failures. They also tend to require coordinated rollouts of components, which can become a drag on developer productivity...

Transport Layer Security (TLS)

Transport Layer Security (TLS) is a protocol designed to provide increased security for communication over a network. It ensures the secure delivery of data sent over the Internet, preventing possible monitoring and/or alteration of the data in transit. This protocol is widely used in applications such as messaging and email.

Problem it addresses

Without TLS, sensitive information such as browsing habits, email correspondence, online chats, and conference calls can easily be intercepted and modified by others during transmission...
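In practice, application code enables TLS through a library rather than implementing the protocol itself. As a small illustration using Python's standard `ssl` module, the default client context enables certificate verification and hostname checking, the two checks that prevent the monitoring and alteration described above.

```python
import ssl

# Build a client-side TLS context with certificate verification and
# hostname checking enabled, as a browser does before sending data.
context = ssl.create_default_context()

print(context.check_hostname)                    # True: server name must match its certificate
print(context.verify_mode == ssl.CERT_REQUIRED)  # True: server must present a valid certificate
```

A socket wrapped with this context (`context.wrap_socket(...)`) then encrypts all traffic, so an on-path observer sees only ciphertext.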

Vertical Scaling

Vertical scaling, also known as “scaling up and down”, is a technique where a system’s capacity is increased by adding CPU and memory to individual nodes as the workload increases. Say you have a computer with 4 GB of RAM and want to increase its capacity to 16 GB; scaling it vertically means switching to a 16 GB system. (Please refer to horizontal scaling for a different scaling approach.)

Problem it addresses

As demand for an application grows beyond the current capacity of that application instance, we need to find a way to scale (add capacity to) the system...

Virtual Machine

A virtual machine (VM) is a computer and its operating system that is not bound to a particular piece of hardware. VMs rely on virtualization to carve a single physical computer into multiple virtual computers. That separation allows organizations and infrastructure providers to easily create and destroy VMs without impacting the underlying hardware.

Problem it addresses

When a bare metal machine is bound to a single operating system, how well the machine’s resources can be used is somewhat limited...

Virtualization

Virtualization, in the context of cloud native computing, refers to the process of taking a physical computer, sometimes called a server, and allowing it to run multiple isolated operating systems. Those isolated operating systems and their dedicated compute resources (CPU, memory, and network) are referred to as virtual machines or VMs. When we talk about a virtual machine, we’re talking about a software-defined computer. Something that looks and acts like a real computer but is sharing hardware with other virtual machines...

WebAssembly

WebAssembly (often abbreviated as Wasm) is a binary instruction format designed as a portable compilation target for high-level languages like C, C++, Rust, and others. It enables deployment on the web for client-side and server-side applications. It is a low-level bytecode format that can be executed in a virtual machine, typically integrated into web browsers. While initially developed for the web, WebAssembly is a universal runtime and sees applications in non-web environments such as IoT and edge devices...

Zero Trust Architecture

Zero trust architecture prescribes an approach to the design and implementation of IT systems in which implicit trust is completely removed. The core principle is “never trust, always verify”: devices and systems must always verify themselves before communicating with other components of a system. In many networks today, systems and devices inside the corporate network may freely communicate with each other because they are within the trusted boundary of the corporate network perimeter...